The partly parametric and partly nonparametric additive risk model

Aalen's linear hazard rate regression model is a useful and increasingly popular alternative to Cox' multiplicative hazard rate model. It postulates that an individual has hazard rate function $h(s)=z_1\alpha_1(s)+\cdots+z_r\alpha_r(s)$ in terms of his covariate values $z_1,\ldots,z_r$. These are typically levels of various hazard factors, and may also be time-dependent. The hazard factor functions $\alpha_j(s)$ are the parameters of the model and are estimated from data. This is traditionally accomplished in a fully nonparametric way. This paper develops methodology for estimating the hazard factor functions when some of them are modelled parametrically while the others are left unspecified. Large-sample results are reached inside this partly parametric, partly nonparametric framework, which also enables us to assess the goodness of fit of the model's parametric components. In addition, these results are used to pinpoint how much precision is gained, using the parametric-nonparametric model, over the standard nonparametric method. A real-data application is included, along with a brief simulation study.


Introduction and summary
Suppose individual i has observable covariate values $z_{i,1},\ldots,z_{i,r}$ and that these are thought to influence the probability distribution of his life time $T_i$. The most usual way of modelling this is through Cox' regression model for the hazard rate $h_i(s)$, which takes this to be of the form $h_0(s)\exp(\beta_1z_{i,1}+\cdots+\beta_rz_{i,r})$ for certain parameters $\beta_1,\ldots,\beta_r$. Aalen's linear hazard rate regression model has over the past few decades become a useful and popular alternative. It postulates that individual i has hazard rate function
$$h_i(s)=h(s\mid z_i)=z_{i,1}\alpha_1(s)+\cdots+z_{i,r}\alpha_r(s)=z_i^{\rm t}\alpha(s),\eqno(1.1)$$
where the $\alpha_j(s)$ functions are unknown. The observed data comprise triples $(t_i,\delta_i,z_i)$, for individuals $i=1,\ldots,n$, where $\delta_i$ is an indicator for non-censoring. See Aalen (1980, 1989, 1993) and the relevant chapters of the classic monographs Andersen, Borgan, Gill and Keiding (1993, Ch. VII) and Aalen, Borgan and Gjessing (2008, Ch. VI) for general discussion of the (1.1) model, for the most usual estimation methods and their properties, and for applications to various datasets. We comment below on various extensions of and further developments for the basic Aalen model (1.1). The present paper is yet another contribution to this literature, taking some of the regressor functions parametric and the others nonparametric. One may think of $z_j=z_{i,j}$ as the level of hazard factor no. j for the individual, and the $\alpha_j(s)$ function as the associated hazard factor function, or regressor function. Often, the first covariate is the constant 1, and the others are scaled such that zero is the minimum value when the covariate is discrete, or the mean value when the covariate is continuous, in which case one typically also scales the covariate by the inverse of the empirical standard deviation.
In such cases equation (1.1) models hazard rate as the common $\alpha_1(s)$ plus excess contributions due to hazard factors $z_2,\ldots,z_r$. The covariates may also depend upon time as long as they do so in a previsible or predictable fashion; the covariate values $z_i(s)$ at time s should be known just prior to time s. It suffices that the $z_i(s)$ are left-continuous functions of what has been observed on $[0,s]$, i.e., they must not depend on information becoming available after s.
Importantly, the Aalen additive model is typically estimated nonparametrically, with no further assumptions beyond positivity and continuity of $z_i^{\rm t}\alpha(s)$ of (1.1) for all $z_i$ in the support of the distribution of covariates. For the typical application, nonparametric estimates of the cumulative hazard factor functions
$$A_j(t)=\int_0^t\alpha_j(s)\,{\rm d}s \quad\text{for } j=1,\ldots,r\eqno(1.2)$$
are computed and displayed, supplemented with variability estimates. This is used to suggest conclusions about the relative influence over time of the different covariate factors. The survival curves for given individuals may also be read off from the modelling here, and if an individual has covariate vector $z=(z_1,\ldots,z_r)^{\rm t}$, not changing over time, the survival curve is
$$S(t\mid z)=\exp\{-A(t\mid z)\},\quad\text{with } A(t\mid z)=\sum_{j=1}^r z_jA_j(t).\eqno(1.3)$$
There has been considerable further research, extending and finessing aspects of the basic Aalen model (1.1)–(1.3), see e.g. Martinussen and Scheike (2002a, b, 2007, 2009b). McKeague and Sasieni (1994) study a version where some of the $\alpha_j(s)$ functions are taken constant, the others left nonparametric; the present paper extends these ideas and methods further. Stoltenberg (2020) studies the Aalen model in the presence of a cure fraction. Borgan et al. (2007) extend certain features of the model to encompass recurrent event data and to reflect between-subject heterogeneity and missing data. Also of relevance for the present paper, Jullum and Hjort (2017) develop general model selection methods for choosing among parametric and nonparametric candidate models; and Jullum and Hjort (2019) study the possible efficiency gains in specifying a parametric baseline hazard in the Cox regression model.
In applications, the researcher might have firm prior opinions about the functional form of the effect of certain covariates, while being less informed about others. This motivates a framework where some of the hazard factor functions, say the first p, are specified parametrically, while the remaining $q=r-p$ continue to be left unspecified, beyond the basic requirement that the (1.1) quantity is nonnegative across all expected covariate values, for all times s. Writing $z_{i,(1)}$ for the first p components and $z_{i,(2)}$ for the remaining q components of $z_i$, with a similar block division of $\alpha(s)$ into $\alpha_{(1)}(s,\theta)$ and $\alpha_{(2)}(s)$, the model becomes
$$h_i(s)=z_{i,(1)}^{\rm t}\alpha_{(1)}(s,\theta)+z_{i,(2)}^{\rm t}\alpha_{(2)}(s)\eqno(1.4)$$
for $i=1,\ldots,n$. Here $\theta$ is the collection of parameters used to describe the first p hazard factor functions, which would typically take the form $\alpha_j(s,\theta_j)$ for $j=1,\ldots,p$.
The covariates in (1.4) are not dependent on time. As discussed in relation to the Aalen model of (1.1), an extension to time-varying covariates requires only minor modifications to the theory, related to predictability and linear independence of the covariates at all time points. To ease the presentation we stick to covariates that are constant in time.
Our quest is two-fold. We aim first at developing sound estimation methods for the unknowns of the (1.4) model, along with large-sample theory describing the behaviour of these estimators. Secondly, accompanying goodness-of-fit measures will be constructed, to assess the adequacy of the parametric components. To reach these goals our paper proceeds as follows.
We start in Sect. 2 by presenting the natural methods and results for the purely nonparametric and the purely parametric versions of model (1.4), before going on to our favoured estimation strategies for the cases with both parametric and nonparametric components in Sect. 3. In Sect. 4 we derive the required large-sample normality results, for both the parametric and nonparametric parts, enabling statistical inference. A special case of the class of methods we propose is asymptotically optimal; the details concerning such statements have independent interest, and are summarised in Appendix A. Part of the benefit of using parametric rather than nonparametric components in the (1.1) model is that it leads to better precision, again for both the parametric and nonparametric components; this is assessed and illustrated in Appendix B.
Then in Sect. 5 we construct goodness-of-fit monitoring processes, which in particular lead to classes of chi-squared tests. In Sect. 6 the finite-sample behaviour of our estimation and inference methods is illustrated through a simulation study. We also present an empirical application, related to n = 312 primary biliary cirrhosis patients in a double-blind randomised study, comparing our methods to those associated with the fully nonparametric Aalen estimator. These applications illustrate the usefulness of our methods, and showcase the gains in efficiency that are achieved by going partly parametric, partly nonparametric, as opposed to fully nonparametric. Our article ends with a list of remarks, some pointing to further research work, in Sect. 7.

The fully nonparametric and fully parametric cases
Here we establish some notation and briefly describe the estimators $\widehat A_1,\ldots,\widehat A_r$ in typical use for the fully nonparametric model, in Sects. 2.1–2.2. These will be the basis for fitting the parametric and nonparametric components in later sections. We also go through the natural estimation methods for the special case of (1.4) where all components are specified parametrically, in Sect. 2.3.
We first go through and comment on certain assumptions of convenience, which will be taken to hold throughout our article.
Assumptions 1
(1) Ergodicity: All averages $n^{-1}\sum_{i=1}^n\varphi(z_i)$ converge to appropriate limits as n grows. These limits may be interpreted as means with respect to the covariate distribution. This assumption facilitates the mathematical development and makes it easier to give precise statements about e.g. limit distributions of estimators. The large-sample theory is, however, developed conditionally on the observed covariate values, so all randomness lies in $(T_i,\delta_i)$ given these.
(2) Finite time window: Individuals are followed over a fixed finite time interval, say $[0,\tau]$. This is not a restriction in practice. Most results may be extended to the case of $\tau=\infty$, under appropriate assumptions on the censoring mechanism. We shall be content to work with the finite time horizon, with which the martingale limit theory works more smoothly and with fewer technicalities.
(3) Independent censoring and finite variances: The censoring mechanism involved, leading to data $(t_i,\delta_i)$, is not related to the survival mechanism generating the hazard rates. Furthermore, the $r\times r$ matrix function $n^{-1}\sum_{i=1}^n I(T_i\ge s)\,z_iz_i^{\rm t}$ tends in probability to a matrix with full rank r, for each $s\in[0,\tau]$. This means in particular that the censoring distribution does not have a support strictly smaller than $[0,\tau]$, and also that enough linearly independent covariate vectors $z_i$ are present in the risk set at time s, with increasing n.
(4) Smooth parametric components: The $\alpha_j(s,\theta)$ of (1.4) are smooth in $\theta$, with continuous first order derivatives $\alpha_j^*(s,\theta)$ and second order derivatives $\alpha_j^{**}(s,\theta)$, for $\theta$ in a neighbourhood around the true parameter $\theta_0$.

The general integrated weighted least squares estimators
The data consist of triples $(t_i,\delta_i,z_i)$ for each of n individuals, where $t_i$ is the life-time, possibly right-censored, $\delta_i$ an indicator for non-censoring, and $z_i$ the r-dimensional covariate vector, as above. Let $N_i(t)=I\{t_i\le t,\delta_i=1\}$ and $Y_i(t)=I\{t_i\ge t\}$ be the counting process and at-risk indicator for individual i, and introduce the martingale
$$M_i(t)=N_i(t)-\int_0^t Y_i(s)\,z_i^{\rm t}\alpha(s)\,{\rm d}s,$$
so that ${\rm d}N_i(s)=Y_i(s)\,z_i^{\rm t}\alpha(s)\,{\rm d}s+{\rm d}M_i(s)$, the second term here being martingale noise with mean zero. The integrated weighted least squares estimator of $A(t)=(A_1(t),\ldots,A_r(t))^{\rm t}$ is
$$\widehat A(t)=\int_0^t G_n(s)^{-1}\,n^{-1}\sum_{i=1}^n w_i(s)\,z_i\,{\rm d}N_i(s),\quad\text{with } G_n(s)=n^{-1}\sum_{i=1}^n w_i(s)Y_i(s)\,z_iz_i^{\rm t}.\eqno(2.2)$$
Here we have allowed certain weight functions $w_i(s)$ to be used. They are taken to be previsible functions (their values at time s are known at time $s-$), and the most often used choice is $w_i(s)=1$. It is assumed that at least r of the $z_i$ at risk at time s are linearly independent, so that $G_n(s)$ has full rank.
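To make the estimator in (2.2) concrete, here is a minimal sketch in code, using plain weights $w_i(s)=1$ and a finite horizon, on simulated data with constant hazard factor functions. All names and the toy data-generating choices are ours, not the paper's.

```python
import numpy as np

def aalen_estimator(times, delta, Z, tau):
    """Nonparametric Aalen estimator of the cumulative regressor functions
    A_j(t), with plain weights w_i(s) = 1.  At each event time s, the
    increment is G_n(s)^{-1} n^{-1} sum_i z_i dN_i(s), with
    G_n(s) = n^{-1} sum_i Y_i(s) z_i z_i^t."""
    n, r = Z.shape
    event_times = np.sort(times[(delta == 1) & (times <= tau)])
    A = np.zeros((len(event_times), r))
    cum = np.zeros(r)
    for k, s in enumerate(event_times):
        at_risk = times >= s                        # Y_i(s)
        G = Z[at_risk].T @ Z[at_risk] / n           # G_n(s)
        jump = Z[(times == s) & (delta == 1)].sum(axis=0) / n
        cum += np.linalg.solve(G, jump)
        A[k] = cum
    return event_times, A

# toy data: hazard 1 + 0.5 * z2, i.e. alpha_1(s) = 1, alpha_2(s) = 0.5
rng = np.random.default_rng(1)
n = 500
z2 = rng.integers(0, 2, size=n)
Z = np.column_stack([np.ones(n), z2])
T = rng.exponential(1.0 / (1.0 + 0.5 * z2))
C = rng.uniform(0, 2, size=n)                       # censoring times
times, delta = np.minimum(T, C), (T <= C).astype(int)

ts, A = aalen_estimator(times, delta, Z, tau=1.0)
# A[:, 0] should roughly track t, and A[:, 1] roughly 0.5 * t
```

The finite horizon τ mirrors Assumptions 1(2), and also keeps $G_n(s)$ invertible in practice, since the risk set thins out near the end of the observation period.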
These estimators have well-studied properties, see Aalen, Borgan and Gjessing (2008, Ch. VI). In particular, large-sample results are available via the calculus of counting processes and martingales. We briefly review results here, and introduce notation which will be needed in the development to follow. Consider
$$U_n(t)=n^{-1/2}\sum_{i=1}^n\int_0^t w_i(s)\,z_i\,{\rm d}M_i(s) \quad\text{and}\quad H_n(t)=\int_0^t n^{-1}\sum_{i=1}^n w_i(s)^2Y_i(s)\,z_iz_i^{\rm t}\,z_i^{\rm t}\alpha(s)\,{\rm d}s.\eqno(2.3)$$
It follows from the regularity conditions described in Assumptions 1 that there are well-defined limits in probability $G(s)$ and $H(t)$ of $G_n(s)$ and $H_n(t)$, as n increases, where G and H are full-rank $r\times r$ matrix functions. One finds
$$\sqrt n\{\widehat A(t)-A(t)\}=\int_0^t G_n(s)^{-1}\,{\rm d}U_n(s)\to_d\int_0^t G(s)^{-1}\,{\rm d}U(s),\eqno(2.4)$$
where U is a Gaußian martingale with variance level ${\rm Var}\,{\rm d}U(s)={\rm d}H(s)$. In particular, $\sqrt n\{\widehat A(t)-A(t)\}$ tends to a zero-mean normal with variance matrix $\int_0^t G(s)^{-1}\,{\rm d}H(s)\,G(s)^{-1}$. This limiting variance may be estimated from data as $\int_0^t G_n(s)^{-1}\,{\rm d}\widehat H_n(s)\,G_n(s)^{-1}$. There are a couple of options for estimating ${\rm d}H(s)$ consistently, including
$${\rm d}\widehat H_n(s)=n^{-1}\sum_{i=1}^n w_i(s)^2Y_i(s)\,z_iz_i^{\rm t}\,z_i^{\rm t}\,{\rm d}\widehat A(s) \quad\text{and}\quad {\rm d}\widehat H_n(s)=n^{-1}\sum_{i=1}^n w_i(s)^2\,z_iz_i^{\rm t}\,{\rm d}N_i(s).$$
In our empirical work we have used the second option.

Optimal nonparametric estimation
One may show, e.g. exploiting a parallel to the theory of weighted least squares, that the theoretically optimal weights, minimising $G_n(s)^{-1}\,{\rm d}H_n(s)\,G_n(s)^{-1}$, are
$$w_i(s)=1/\{z_i^{\rm t}\alpha(s)\}.\eqno(2.5)$$
The resulting minimum variance corresponds to $F_n(s)^{-1}\,{\rm d}s$, where
$$F_n(s)=n^{-1}\sum_{i=1}^n\frac{Y_i(s)\,z_iz_i^{\rm t}}{z_i^{\rm t}\alpha(s)}.\eqno(2.6)$$
In practice one needs to estimate these weights, say with $\widehat w_i(s)=1/\{z_i^{\rm t}\widehat\alpha(s)\}$, leading to an estimator $\breve A$. One may show that $\sqrt n(\breve A-A)$, with estimated optimal weights, has the same limit distribution as with the optimal weights, provided the $\widehat\alpha(s)$ estimator satisfies certain uniform consistency conditions, see Huffer and McKeague (1991). Candidates for $\widehat\alpha(s)$ include kernel smoothing of the plain Aalen estimators, which use $w_i(s)=1$, and local linear likelihood smoothing. The limit distribution variance for this optimal $\breve A$ estimator is $\int_0^t F(s)^{-1}\,{\rm d}s$, which is the minimum over all $\int_0^t G(s)^{-1}\,{\rm d}H(s)\,G(s)^{-1}$. Here $F(s)$ is the limit in probability of $F_n(s)$ of (2.6), assumed to exist.
While $F(s)^{-1}\,{\rm d}s$ may be somewhat smaller in size than the most often used $G(s)^{-1}\,{\rm d}H(s)\,G(s)^{-1}$, with weights $w_i(s)=1$, there are additional variability contributions associated with this estimator, which therefore is not automatically better than the Aalen ones for finite n. Our default choice, for empirical work, is therefore to use the 'plain weights' $w_i(s)=1$ in (2.2).

The fully parametric model
Consider now the fully parametric model where $\alpha_j(s)=\alpha_j(s,\theta)$ for $j=1,\ldots,r$. We study the maximum likelihood estimator $\widehat\theta$, maximising the log-likelihood, which may be written
$$\ell_n(\theta)=\sum_{i=1}^n\int_0^\tau\bigl[\log\{z_i^{\rm t}\alpha(s,\theta)\}\,{\rm d}N_i(s)-Y_i(s)\,z_i^{\rm t}\alpha(s,\theta)\,{\rm d}s\bigr].$$
Here $\tau$ is an upper bound for the period of observation, assumed finite, see Assumptions 1. Let $\alpha^*(s,\theta)=\partial\alpha(s,\theta)/\partial\theta$ be the $r\times m$ matrix of partial derivatives $\partial\alpha_j(s,\theta)/\partial\theta_k$, where m is the length of the parameter vector $\theta$. Assuming the model holds, with $\theta_0$ the true parameter value, let
$$\Sigma_0=\int_0^\tau\alpha^*(s,\theta_0)^{\rm t}F(s)\,\alpha^*(s,\theta_0)\,{\rm d}s,$$
with $F(s)$ the limit in probability of $F_n(s)$ of (2.6). We then have the following.

Proposition 2.1 Under standard regularity conditions, including those described in Assumptions 1, and supposing the model holds for a true parameter $\theta_0$, an inner point of the parameter space, $\sqrt n(\widehat\theta-\theta_0)\to_d{\rm N}_m(0,\Sigma_0^{-1})$.

Proof The proof follows the lines of Borgan (1984) and Hjort (1986, 1992). We need the first and second derivatives of $z_i^{\rm t}\alpha(s,\theta)$ with respect to $\theta$, and write these respectively as $\alpha^*(s,\theta)^{\rm t}z_i$, of dimension $m\times1$, and $\sum_{j=1}^r z_{i,j}\,\alpha_j^{**}(s,\theta)$, where $\alpha_j^{**}(s,\theta)$ is the $m\times m$ matrix of second order derivatives of $\alpha_j(s,\theta)$. The first derivative of $\ell_n$ is
$$u_n(\theta)=\sum_{i=1}^n\int_0^\tau\frac{\alpha^*(s,\theta)^{\rm t}z_i}{z_i^{\rm t}\alpha(s,\theta)}\,\{{\rm d}N_i(s)-Y_i(s)\,z_i^{\rm t}\alpha(s,\theta)\,{\rm d}s\},$$
which at $\theta_0$ is a martingale, evaluated at $\tau$, with variance process
$$\sum_{i=1}^n\int_0^\tau\frac{\alpha^*(s,\theta_0)^{\rm t}z_iz_i^{\rm t}\alpha^*(s,\theta_0)}{z_i^{\rm t}\alpha(s,\theta_0)}\,Y_i(s)\,{\rm d}s=n\int_0^\tau\alpha^*(s,\theta_0)^{\rm t}F_n(s)\,\alpha^*(s,\theta_0)\,{\rm d}s=nJ_n.$$
It follows that $n^{-1/2}u_n(\theta_0)$ tends to a ${\rm N}_m(0,\Sigma_0)$ random variable, under model conditions. We next need to work with the second order derivative $i_n(\theta)$ of $\ell_n$, to show that $-n^{-1}i_n(\theta)=J_n+o_{\rm pr}(1)$ at the model. Using the martingales again, and simplifying, shows that at the true value $\theta_0$ the first term of $-n^{-1}i_n(\theta_0)$ is equal to $\int_0^\tau(\alpha^*)^{\rm t}F_n\,\alpha^*\,{\rm d}s=J_n$, while the second goes to zero in probability, by an application of Lenglart's inequality, see e.g. Andersen et al. (1993, p. 86). Some further analysis, similar in nature to material in Hjort (1992, Sections 2–3), leads in the end to
$$\sqrt n(\widehat\theta-\theta_0)=J_n^{-1}\,n^{-1/2}u_n(\theta_0)+o_{\rm pr}(1)\to_d{\rm N}_m(0,\Sigma_0^{-1}).$$
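For the simplest special case, with all hazard factor functions constant in time, $\alpha_j(s,\theta)=\theta_j$, the log-likelihood above reduces to $\sum_i\{\delta_i\log(z_i^{\rm t}\theta)-\min(t_i,\tau)\,z_i^{\rm t}\theta\}$, and maximum likelihood becomes a small numerical optimisation. The sketch below is our own illustration (function names and data choices are not from the paper) and checks this on simulated data:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, times, delta, Z, tau):
    """Minus the log-likelihood of Sect. 2.3 for constant hazard factor
    functions alpha_j(s, theta) = theta_j on the window [0, tau]."""
    lam = Z @ theta                        # z_i^t theta; must stay positive
    if np.any(lam <= 0):
        return np.inf
    expo = np.minimum(times, tau)          # time at risk within [0, tau]
    d = delta * (times <= tau)             # events inside the window
    return -(np.sum(d * np.log(lam)) - np.sum(expo * lam))

rng = np.random.default_rng(2)
n = 2000
z2 = rng.integers(0, 2, size=n)
Z = np.column_stack([np.ones(n), z2])
theta_true = np.array([1.0, 0.5])
T = rng.exponential(1.0 / (Z @ theta_true))
C = rng.uniform(0, 2, size=n)
times, delta = np.minimum(T, C), (T <= C).astype(float)

res = minimize(neg_loglik, x0=np.array([0.5, 0.1]),
               args=(times, delta, Z, 2.0), method="Nelder-Mead")
theta_hat = res.x                          # should be close to (1.0, 0.5)
```

The derivative-free Nelder-Mead search is used here simply because the additive form forces the positivity constraint $z_i^{\rm t}\theta>0$, which the `np.inf` guard handles crudely but robustly.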

Estimation in the parametric and nonparametric model
In this section we describe estimation methods for the parametric-nonparametric model (1.4). These involve a Step (a) for estimating the parametric parts $A_{(1)}(t,\theta)$, with these estimates also being used in a Step (b) for estimating the nonparametric parts. In particular, our estimators $\widetilde A_{(2)}(t)$ for the nonparametric parts utilise the parametric structure for $A_{(1)}(t,\theta)$, and are not identical to the direct Aalen estimators $\widehat A_{(2)}(t)$; the point is to utilise the parametric knowledge, for increased precision.

Estimating the parametric part
Our preferred version of Step (a) is as follows. It is desirable to find values of $\theta$ which make the integrated, weighted quadratic form
$$\int_0^\tau\{{\rm d}A_{(1)}(s)-\alpha_{(1)}(s,\theta)\,{\rm d}s\}^{\rm t}\,V_n(s)\,\{{\rm d}A_{(1)}(s)-\alpha_{(1)}(s,\theta)\,{\rm d}s\}$$
as small as possible. Here $\tau$ is an upper time point, which could be chosen by convenience for the application at hand, while $V_n(s)$ is a full-rank symmetric $p\times p$ matrix weight function. This minimisation cannot be directly achieved, since the quadratic form depends on the unknown functions. Upon multiplying out and omitting the one term which does not involve the parameters, however, the empirical version
$$C_n(\theta)=\int_0^\tau\alpha_{(1)}(s,\theta)^{\rm t}V_n(s)\,\alpha_{(1)}(s,\theta)\,{\rm d}s-2\int_0^\tau\alpha_{(1)}(s,\theta)^{\rm t}V_n(s)\,{\rm d}\widehat A_{(1)}(s)\eqno(3.1)$$
emerges. Here ${\rm d}\widehat A_{(1)}(s)$ contains the first p components of the nonparametric ${\rm d}\widehat A(s)$ of (2.2), and we let $\widehat\theta$ be the minimiser of the criterion function $C_n(\theta)$.
Note that the $V_n(s)$ may very well be data-dependent. We typically have choices in mind where $V_n(s)\to_{\rm pr}V(s)$ for a suitable limit matrix function; see the following section, where we also exhibit a particular choice of $V_n(s)$ which leads to optimal performance for large n. This involves the nontrivial estimates $1/\{z_i^{\rm t}\widehat\alpha(s)\}$, however, discussed in connection with (2.5)–(2.6), and these are often too unstable for small and moderate n. Our default choice is the simpler weight matrix given in (3.2), which has a well-defined limit in probability function $V(s)$ by Assumptions 1. For the simplest case of having the parametric hazard components constant, with $\alpha_{(1),j}(s,\theta)=\theta_j$ for $j=1,\ldots,p$, the above yields
$$\widehat\theta=\Bigl\{\int_0^\tau V_n(s)\,{\rm d}s\Bigr\}^{-1}\int_0^\tau V_n(s)\,{\rm d}\widehat A_{(1)}(s).$$
These are the best constants, seen as yielding approximations $\widehat\theta_jt$ to the nonparametric $\widehat A_j(t)$. With our default weight function in (3.2), the estimator $\widehat\theta$ is similar to the estimator proposed by McKeague and Sasieni (1994, Eq. (2.4), p. 503), but not identical to it. To obtain their estimator, McKeague and Sasieni solve a system of equations obtained by appropriately modifying the score function, obtaining an estimating equation linear in $\theta$ (their $\beta$). Similar techniques may be used with more general parametric hazard functions, thus possibly replacing the $C_n(\theta)$ we work with here with a slightly different criterion function.
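As a quick numerical sanity check of the constant-components special case: with $V_n(s)$ taken as the identity (a stand-in choice of ours), the criterion becomes $C_n(\theta)=\tau\,\theta^{\rm t}\theta-2\,\theta^{\rm t}\widehat A_{(1)}(\tau)$, whose minimiser is the closed-form $\widehat\theta=\widehat A_{(1)}(\tau)/\tau$. The increments below are made-up stand-ins for Aalen estimator output:

```python
import numpy as np

tau = 2.0
rng = np.random.default_rng(3)
# pretend nonparametric increments dA_hat for p = 2 components, whose
# underlying 'true' constants are 0.3 and 0.1, plus a little noise
ds = np.diff(np.linspace(0, tau, 401))
dA = np.column_stack([0.3 * ds, 0.1 * ds])
dA += rng.normal(scale=1e-3, size=dA.shape)
A_tau = dA.sum(axis=0)                         # A_hat_(1)(tau)

def C_n(theta):
    # empirical criterion with V_n = identity and constant alpha components
    return tau * theta @ theta - 2 * theta @ A_tau

theta_closed = A_tau / tau                     # the 'best constants'
# brute-force check: no nearby theta does better than the closed form
grid = [theta_closed + np.array([a, b])
        for a in (-0.05, 0.0, 0.05) for b in (-0.05, 0.0, 0.05)]
best = min(grid, key=C_n)
```

Since $C_n$ is an exact quadratic centred at $\widehat A_{(1)}(\tau)/\tau$ in this case, the grid search necessarily returns the closed-form minimiser.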

Backfitting to re-estimate the nonparametric part
We now describe a version of Step (b), after Step (a) has yielded parametric estimates $\alpha_j(s,\widehat\theta)$ for $j=1,\ldots,p$ as above. Consider the nonparametric part of equation (1.4). A more precise definition of the martingales involved, now that work is carried out inside the (1.4) framework, reads
$$M_i(t)=N_i(t)-\int_0^t Y_i(s)\{z_{i,(1)}^{\rm t}\alpha_{(1)}(s,\theta_0)+z_{i,(2)}^{\rm t}\alpha_{(2)}(s)\}\,{\rm d}s,$$
with $\theta_0$ the true parameter. To utilise the parametric knowledge, so as to reach better estimation precision for the nonparametric components, this encourages using
$$\widetilde A_{(2)}(t)=\int_0^t G_{n,22}(s)^{-1}\,n^{-1}\sum_{i=1}^n w_i(s)\,z_{i,(2)}\,\{{\rm d}N_i(s)-Y_i(s)\,z_{i,(1)}^{\rm t}\alpha_{(1)}(s,\widehat\theta)\,{\rm d}s\},\eqno(3.4)$$
with $G_{n,22}(s)=n^{-1}\sum_{i=1}^n w_i(s)Y_i(s)\,z_{i,(2)}z_{i,(2)}^{\rm t}$. Note that the method outlined here is really a class of procedures, in that different weight schemes may be used in (3.4), and also different weight functions $V_n$ when minimising the $C_n(\theta)$ function to obtain the $\widehat\theta$ estimator. In (3.4), we may e.g. use vanilla weights $w_i(s)=1$, or the more sophisticated $w_i(s)$ of Sect. 2.2. An asymptotically optimal scheme is found in the next section.
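A minimal sketch of the two-step idea, with our own simplifications (one parametric component which is constant, $\alpha_1(s,\theta)=\theta$, attached to the constant covariate 1; one nonparametric component; plain weights $w_i(s)=1$; and a Step (a) estimate simply stood in by the true value plus noise):

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau = 4000, 1.0
z2 = rng.uniform(0.5, 1.5, size=n)
theta_true = 0.8                  # alpha_1(s) = 0.8, attached to covariate 1
# nonparametric target alpha_2(s) = 1 + s, so A_2(t) = t + t^2 / 2
# cumulative hazard H_i(t) = theta*t + z2_i*(t + t^2/2); invert H_i(T) = E
E = rng.exponential(size=n)
a, b = z2 / 2, theta_true + z2
T = (-b + np.sqrt(b**2 + 4 * a * E)) / (2 * a)
times = np.minimum(T, tau)
delta = (T <= tau).astype(int)

theta_hat = theta_true + rng.normal(scale=0.01)   # stand-in for Step (a)

# Step (b): Aalen-type step on z2 only, subtracting the fitted parametric
# contribution from the counting-process increments, in the spirit of (3.4)
A2_hat, prev = 0.0, 0.0
for s in np.sort(times[delta == 1]):
    ds = s - prev
    prev = s
    at_risk = times >= s
    G22 = np.sum(z2[at_risk] ** 2) / n            # G_{n,22}(s), scalar here
    jump = np.sum(z2[(times == s) & (delta == 1)]) / n
    drift = theta_hat * np.sum(z2[at_risk]) / n * ds
    A2_hat += (jump - drift) / G22
# A2_hat should approximate A_2(tau) = tau + tau^2/2 = 1.5
```

The drift integral is handled with a crude left-Riemann approximation over the gaps between event times, which is adequate here since the gaps are tiny.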

Large-sample behaviour and optimality
Here we demonstrate limiting normality for the estimators of Sect. 3, i.e. $\widehat\theta$ minimising $C_n(\theta)$ of (3.1) and $\widetilde A_{(2)}(t)$ of (3.4), with precise formulae for the limit distribution variances and covariances. Results are derived under the model conditions (1.4), with $\theta_0$ denoting the true parameter for the parametric parts.

Large-sample theory for the parametric part
For studying our estimators we also need the $p\times p$ function $Q(s)$, defined by
$$Q(s)\,{\rm d}s=[G(s)^{-1}\,{\rm d}H(s)\,G(s)^{-1}]_{11},\eqno(4.1)$$
the upper left $p\times p$ block of the limiting variance of the increments of the nonparametric estimator.

Proposition 4.1 Assume the conditions of Assumptions 1 are in force, that the model (1.4) holds with true parameter $\theta_0$ an inner point of the parameter space, and that $V_n(s)\to_{\rm pr}V(s)$ uniformly over $[0,\tau]$. Then $\Lambda_n=\sqrt n(\widehat\theta-\theta_0)\to_d\Lambda=\Sigma^{-1}S\sim{\rm N}_m(0,\Sigma^{-1}\Omega\Sigma^{-1})$, in terms of
$$\Sigma=\int_0^\tau\alpha^*_{(1)}(s,\theta_0)^{\rm t}V(s)\,\alpha^*_{(1)}(s,\theta_0)\,{\rm d}s \quad\text{and}\quad \Omega=\int_0^\tau\alpha^*_{(1)}(s,\theta_0)^{\rm t}V(s)\,Q(s)\,V(s)\,\alpha^*_{(1)}(s,\theta_0)\,{\rm d}s.$$

Proof Setting the derivative of the criterion function (3.1) equal to zero gives the estimation equation $S_n(\widehat\theta)=0$, with
$$S_n(\theta)=\int_0^\tau\alpha^*_{(1)}(s,\theta)^{\rm t}V_n(s)\,\{{\rm d}\widehat A_{(1)}(s)-\alpha_{(1)}(s,\theta)\,{\rm d}s\}.$$
This redefines $\widehat\theta$, under appropriate conditions, as an M-type estimator; see Hjort (1985, Section 4) and Hjort (1992, Section 5). Note that $\sqrt n\,S_n(\theta_0)\to_d S=\int_0^\tau\alpha^*_{(1)}(s,\theta_0)^{\rm t}V(s)\,{\rm d}Z(s)$, with Z the Gaußian martingale with ${\rm Var}\,{\rm d}Z(s)=Q(s)\,{\rm d}s$ arising as limit of the first p components of $\sqrt n(\widehat A-A)$; at the true $\theta_0$ this is a zero-mean normal with variance matrix $\Omega$. A little more work gives expressions for the $m\times m$ matrix $\Sigma_n(\theta)$, containing minus the derivative of $S_n(\theta)$ with respect to the m parameters, as
$$\Sigma_n(\theta)=\int_0^\tau\alpha^*_{(1)}(s,\theta)^{\rm t}V_n(s)\,\alpha^*_{(1)}(s,\theta)\,{\rm d}s+E_n(\theta).$$
Here the second matrix has components which are linear combinations of smooth and bounded functions of $\theta$ times the p components of ${\rm d}\widehat A_{(1)}(s)-\alpha_{(1)}(s,\theta)\,{\rm d}s$. The point is that all terms of $E_n(\theta)$ go to zero in probability, under model conditions, at $\theta_0$, so $\Sigma_n(\theta_0)\to_{\rm pr}\Sigma$. This leads to $\Lambda_n\to_d\Sigma^{-1}S$, proving the claim.
Asking for the best performance under model conditions, at least for large n, is the same as choosing the $p\times p$ matrix function V to minimise $\Sigma^{-1}\Omega\Sigma^{-1}$. This is achieved when $V(s)$ is taken proportional to $Q(s)^{-1}$, assuming $Q(s)$ to have full rank p across the range $[0,\tau]$. Then $\Sigma=\Omega=\Sigma_0$, say, with the minimum variance matrix being equal to
$$\Sigma_0^{-1}=\Bigl\{\int_0^\tau\alpha^*_{(1)}(s,\theta_0)^{\rm t}Q(s)^{-1}\alpha^*_{(1)}(s,\theta_0)\,{\rm d}s\Bigr\}^{-1}.$$
To prove that this is the minimum size matrix, let $Z(t)$ be a Gaußian martingale with incremental variance ${\rm Var}\,{\rm d}Z(s)=Q(s)\,{\rm d}s$, and consider the random vectors
$$X_1=\int_0^\tau\alpha^{*{\rm t}}_{(1)}V\,{\rm d}Z \quad\text{and}\quad X_2=\int_0^\tau\alpha^{*{\rm t}}_{(1)}Q^{-1}\,{\rm d}Z.$$
In usual block notation, for the joint variance matrix of $(X_1,X_2)$, the matrix $\Xi_{11}-\Xi_{12}\Xi_{22}^{-1}\Xi_{21}$ must then be nonnegative definite. This is equivalent to the minimisation claim made.
The next question is how one can make $\Sigma_0^{-1}$ as small as possible. But this is the same as minimising over $Q(s)\,{\rm d}s=[G(s)^{-1}\,{\rm d}H(s)\,G(s)^{-1}]_{11}$, which we have seen takes place for the optimal weights (2.5), and for which we have $Q(s)=[F(s)^{-1}]_{11}=F^{11}(s)$, say. The asymptotically optimal method is accordingly to use as $V_n(s)$ a matrix function which converges in probability, if possible, to $V(s)=F^{11}(s)^{-1}$. But this is achieved via $V_n(s)=\widehat F_n^{11}(s)^{-1}$, where $\widehat F_n$ is as $F_n$ of (2.6), but with estimates $z_i^{\rm t}\widehat\alpha(s)$ inserted. We may conclude that this method gives the optimal performance for large n, with limit variance matrix
$$\Bigl\{\int_0^\tau\alpha^*_{(1)}(s,\theta_0)^{\rm t}F^{11}(s)^{-1}\alpha^*_{(1)}(s,\theta_0)\,{\rm d}s\Bigr\}^{-1}.$$
It is in fact not possible to improve on this, with any other estimation method. That this is indeed so is detailed in Appendix A.

Large-sample theory for the nonparametric part
To study the behaviour of $\widetilde A_{p+1},\ldots,\widetilde A_{p+q}$ we need the $q\times m$ function
$$\phi_n(s)=G_{n,22}(s)^{-1}\,n^{-1}\sum_{i=1}^n w_i(s)Y_i(s)\,z_{i,(2)}z_{i,(1)}^{\rm t}\,\alpha^*_{(1)}(s,\theta_0),$$
which under the mild general conditions stated previously has a limit in probability function $\phi(s)$.
Proposition 4.2 Assume that the regularity conditions of Proposition 4.1 are in force, and let $\Lambda=\Sigma^{-1}S$ be the limit variable for $\Lambda_n=\sqrt n(\widehat\theta-\theta_0)$. Then there is process convergence
$$\sqrt n\{\widetilde A_{(2)}(t)-A_{(2)}(t)\}\to_d\int_0^t G_{22}(s)^{-1}\,{\rm d}U_{(2)}(s)-\int_0^t\phi(s)\,{\rm d}s\;\Lambda.\eqno(4.4)$$

Proof By Taylor expansion of $\alpha_{(1)}(s,\widehat\theta)$ around $\theta_0$ in (3.4) we have $\sqrt n\{\widetilde A_{(2)}(t)-A_{(2)}(t)\}\doteq\int_0^t G_{n,22}(s)^{-1}\,{\rm d}U_{n,(2)}(s)-\int_0^t\phi_n(s)\,{\rm d}s\;\Lambda_n$. Here $X_n\doteq X_n'$ means that the difference tends to zero in probability. The claim follows from general theory of convergence of processes in the $D[0,\tau]$ space.
Propositions 4.1 and 4.2 give clear descriptions of the large-sample behaviour of our parametric and nonparametric estimators, separately. We also need the joint limiting distribution of $\widehat\theta$ and $\widetilde A_{(2)}(t)$, for reaching inference for quantities involving both parts, like the survival curves $S(t\mid z)$ with $A(t\mid z)=z_1A_1(t,\theta)+\cdots+z_pA_p(t,\theta)+z_{p+1}A_{p+1}(t)+\cdots+z_{p+q}A_{p+q}(t)$. Here we give details for the joint limiting distribution of $\widehat A_{(1)}(t,\widehat\theta)$ and $\widetilde A_{(2)}(t)$. Writing $A^*_{(1)}(t,\theta)=\int_0^t\alpha^*_{(1)}(s,\theta)\,{\rm d}s$, we indeed have
$$\sqrt n\begin{pmatrix}\widehat A_{(1)}(t,\widehat\theta)-A_{(1)}(t,\theta_0)\\ \widetilde A_{(2)}(t)-A_{(2)}(t)\end{pmatrix}\to_d\begin{pmatrix}A^*_{(1)}(t,\theta_0)\,\Lambda\\ \int_0^t G_{22}^{-1}\,{\rm d}U_{(2)}-\int_0^t\phi\,{\rm d}s\;\Lambda\end{pmatrix}\sim{\rm N}_{p+q}\Bigl(0,\begin{pmatrix}\Xi_{11}(t)&\Xi_{12}(t)\\ \Xi_{21}(t)&\Xi_{22}(t)\end{pmatrix}\Bigr),\eqno(4.5)$$
with formulae for the variance matrix to follow. In (4.4), the first term is a Gaußian martingale with variance $\int_0^t G_{22}^{-1}\,{\rm d}H_{22}\,G_{22}^{-1}$, while the second term also is normal, with a variance which can be written down via Proposition 4.1. By combining Propositions 4.1 and 4.2, and applying the delta method, we reach (4.5). First, $\Xi_{11}(t)=A^*_{(1)}(t,\theta_0)\,\Sigma^{-1}\Omega\Sigma^{-1}\,A^*_{(1)}(t,\theta_0)^{\rm t}$. Second, for the covariance between the two terms in (4.4), we have
$$\kappa(t)={\rm cov}\Bigl\{\int_0^t G_{22}^{-1}\,{\rm d}U_{(2)},\Lambda\Bigr\}=\int_0^t G_{22}(s)^{-1}\,[{\rm d}H(s)\,G(s)^{-1}]_{21}\,V(s)\,\alpha^*_{(1)}(s,\theta_0)\;\Sigma^{-1},\eqno(4.6)$$
so that, with $\Phi(t)=\int_0^t\phi(s)\,{\rm d}s$, the full variance of the right hand side of (4.4) is
$$\Xi_{22}(t)=\int_0^t G_{22}^{-1}\,{\rm d}H_{22}\,G_{22}^{-1}+\Phi(t)\,\Sigma^{-1}\Omega\Sigma^{-1}\,\Phi(t)^{\rm t}-\kappa(t)\,\Phi(t)^{\rm t}-\Phi(t)\,\kappa(t)^{\rm t}.$$
Third, the lower off-diagonal block in the covariance matrix in (4.5) is
$$\Xi_{21}(t)=\{\kappa(t)-\Phi(t)\,\Sigma^{-1}\Omega\Sigma^{-1}\}\,A^*_{(1)}(t,\theta_0)^{\rm t},$$
where we use (4.6), and $\Xi_{12}(t)=\Xi_{21}(t)^{\rm t}$. It is clear how to estimate these covariance matrices, for example when the traditional Aalen estimator weights $w_i(s)=1$ are being used. It is interesting to study the special case where $V(s)=F^{11}(s)^{-1}$, which by the above leads to optimal large-sample performance. Then the two terms of the limit process are in fact independent. This follows from ${\rm d}H(s)=F(s)\,{\rm d}s$ and $G=F$, which make $[{\rm d}H\,G^{-1}]_{21}=[I]_{21}\,{\rm d}s=0$ in (4.6). For this situation, therefore, the covariance function for the limit process in (4.4) may be written
$${\rm cov}(t_1,t_2)=\int_0^{\min(t_1,t_2)}F_{22}(s)^{-1}\,{\rm d}s+\Phi(t_1)\,\Sigma_0^{-1}\,\Phi(t_2)^{\rm t},$$
where $\Sigma_0=\int_0^\tau\alpha^*_{(1)}(s,\theta_0)^{\rm t}F^{11}(s)^{-1}\alpha^*_{(1)}(s,\theta_0)\,{\rm d}s$.

Assessing goodness of fit
We have investigated the parametric-nonparametric model (1.4), constructed estimators $\alpha_j(s,\widehat\theta)$ for the parametric components, and derived large-sample properties, leading to inference methods for all relevant quantities, with better precision than for the traditional nonparametric methods. The underlying assumption for these good results is that the parametric structure actually holds. In this section we construct monitoring processes and related tests to assess the adequacy of the parametric part.

Goodness of fit processes
For each j we may consider monitoring processes of the type $\sqrt n\int_0^t K_{n,j}(s)\{{\rm d}\widehat A_j(s)-\alpha_j(s,\widehat\theta)\,{\rm d}s\}$, where $K_{n,j}$ is a suitable weight function. More generally, let
$$R_n(t)=\sqrt n\int_0^t K_n(s)\{{\rm d}\widehat A_{(1)}(s)-\alpha_{(1)}(s,\widehat\theta)\,{\rm d}s\},\eqno(5.1)$$
in terms of a $p\times p$ matrix $K_n(s)=\{K_{n,ij}(s)\}$ of weight functions.

Proposition 5.1 Assume that the $K_{n,ij}$ functions are previsible and converge uniformly in probability to $K_{ij}$ functions over $[0,\tau]$, that the regularity conditions associated with the two propositions of Section 4 are in force, and that the parametric model is true for the hazard functions $\alpha_j(s,\theta)$, for $j=1,\ldots,p$. Then there is process convergence $R_n\to_d R$ in the space $D([0,\tau])^p$, equipped with the Skorokhod product topology, with
$$R(t)=\int_0^t K(s)\,{\rm d}W(s)-\int_0^t K(s)\,\alpha^*_{(1)}(s,\theta_0)\,{\rm d}s\;\Lambda.$$

Proof Letting as before $\alpha_j^*(s,\theta)=\partial\alpha_j(s,\theta)/\partial\theta$, and using the representation (2.4) with ${\rm d}U_n(s)$ of (2.3), the essence here is that
$${\rm d}\widehat A_j(s)-\alpha_j(s,\widehat\theta)\,{\rm d}s\doteq\{G_n(s)^{-1}\,{\rm d}U_n(s)/\sqrt n\}_j-\alpha_j^*(s,\theta_0)^{\rm t}(\widehat\theta-\theta_0)\,{\rm d}s$$
for $j=1,\ldots,p$, by Taylor analysis. It follows from methods and results of Sect. 4 that there is joint distributional convergence of $\int K_nG_n^{-1}\,{\rm d}U_n$ and $\Lambda_n=\sqrt n(\widehat\theta-\theta_0)$. With the weight functions $K_n(s)$ converging uniformly in probability to the $K(s)$, we reach $R_n\to_d R$ via details and methods similar to those used in Hjort (1990, Sections 3–4), for a similar though somewhat different setup.
The limiting processes R 1 , . . . , R p are jointly Gaußian with zero mean. To find their covariance functions we utilise the structure found in the proof of the proposition.
The W process is a normal martingale with independent increments, and ${\rm Var}\,{\rm d}W(s)=Q(s)\,{\rm d}s$, as before, see (4.1). Then, writing also $\Gamma(t)$ for the $p\times m$ matrix $\int_0^t K\,\alpha^*_{(1)}\,{\rm d}s$, the limit process takes the form $R(t)=\int_0^t K\,{\rm d}W-\Gamma(t)\,\Lambda$. Taking the mean of $R(t_1)R(t_2)^{\rm t}$, using the zero-mean independent increments property of W, gives
$${\rm cov}\{R(t_1),R(t_2)\}=\int_0^{\min(t_1,t_2)}K\,Q\,K^{\rm t}\,{\rm d}s-D(t_1)\,\Sigma^{-1}\,\Gamma(t_2)^{\rm t}-\Gamma(t_1)\,\Sigma^{-1}\,D(t_2)^{\rm t}+\Gamma(t_1)\,\Sigma^{-1}\Omega\Sigma^{-1}\,\Gamma(t_2)^{\rm t},$$
in which $D(t)=\int_0^t K\,Q\,V\,\alpha^*_{(1)}\,{\rm d}s$. The (5.1) framework involves a full matrix of weight functions and gives p processes for simultaneous monitoring. We note the special case of a single $p\times1$ weight function $K_n=(K_{n,1},\ldots,K_{n,p})^{\rm t}$, where a result can be read off from those above, by considering only one monitoring process, namely the linear combination of compared increments
$$R_n(t)=\sqrt n\int_0^t K_n(s)^{\rm t}\{{\rm d}\widehat A_{(1)}(s)-\alpha_{(1)}(s,\widehat\theta)\,{\rm d}s\}.$$
If in particular $K_n=(0,\ldots,K_{n,j},\ldots,0)^{\rm t}$, we are led to the separate monitoring processes
$$R_{n,j}(t)=\sqrt n\int_0^t K_{n,j}(s)\{{\rm d}\widehat A_j(s)-\alpha_j(s,\widehat\theta)\,{\rm d}s\}.\eqno(5.2)$$
This $R_{n,j}(t)$ tends in distribution to
$$R_j(t)=\int_0^t K_j\,{\rm d}W_j-\gamma_j(t)^{\rm t}\Lambda,\quad\text{with }\gamma_j(t)=\int_0^t K_j\,\alpha_j^*\,{\rm d}s.\eqno(5.3)$$
With calculations similar to those above, the covariance function ${\rm cov}\{R_j(t_1),R_j(t_2)\}$ may be expressed as
$$\int_0^{\min(t_1,t_2)}K_j^2\,Q_{jj}\,{\rm d}s-d_j(t_1)^{\rm t}\Sigma^{-1}\gamma_j(t_2)-\gamma_j(t_1)^{\rm t}\Sigma^{-1}d_j(t_2)+\gamma_j(t_1)^{\rm t}\Sigma^{-1}\Omega\Sigma^{-1}\gamma_j(t_2),\eqno(5.4)$$
with $d_j(t)=\int_0^t K_j\,\alpha^{*{\rm t}}_{(1)}V\,Q\,e_j\,{\rm d}s$ and $e_j$ the jth unit vector.

Chi-squared tests
Divide the time observation period $[0,\tau]$ into time windows $I_\ell=(c_{\ell-1},c_\ell]$ for $\ell=1,\ldots,k$, where $c_0=0$ and $c_k=\tau$. For each window we may compute the p-variate increment $R_n(I_\ell)=R_n(c_\ell)-R_n(c_{\ell-1})$. From Proposition 5.1, the collection of these tends in distribution to that of $R(I_\ell)=R(c_\ell)-R(c_{\ell-1})$, which under the model hypothesis is zero-mean multinormal and with a covariance structure which might be calculated from the above results.
We may somewhat grandly test the full simultaneous parametric hypothesis that all $\alpha_j(s,\theta)$ components hold, via the p-dimensional $R_n(I_\ell)$. Here we outline simpler but natural strategies connected to studying one $\alpha_j(s,\theta)$ at a time. For this we use $R_{n,j}(t)\to_d R_j(t)$, as per (5.2)–(5.3), for a given choice of weight function $K_{n,j}(s)$. We compute increments $R_{n,j,\ell}=R_{n,j}(I_\ell)$, and these tend jointly to the vector of increments $R_j(I_\ell)=R_j(c_\ell)-R_j(c_{\ell-1})$. This is a zero-mean multinormal, say ${\rm N}_k(0,\Sigma_j)$, with $\Sigma_j$ the appropriate covariance matrix flowing from the covariance function (5.4). There are several ways in which we may now test the $\alpha_j(s,\theta)$ hypothesis. In particular,
$$Z_{n,j}=\boldsymbol R_{n,j}^{\rm t}\,\widehat\Sigma_j^{-1}\,\boldsymbol R_{n,j}\to_d\chi^2_k,$$
where $\boldsymbol R_{n,j}$ is the vector of the $R_{n,j,\ell}$, tending in distribution to $\boldsymbol R_j$, the vector of the $R_j(I_\ell)$, and $\widehat\Sigma_j$ a consistent estimator of the $k\times k$ matrix $\Sigma_j$.
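In code, the chi-squared recipe is simply a quadratic form in the window increments. The toy check below verifies the null calibration by drawing the increments directly from their limiting multinormal distribution; the covariance matrix used is a made-up stand-in for the estimated $\Sigma_j$:

```python
import numpy as np

rng = np.random.default_rng(5)
k = 4                                       # number of time windows
Sigma_j = 0.5 * np.eye(k) + 0.1             # stand-in covariance of increments
Sigma_inv = np.linalg.inv(Sigma_j)
L = np.linalg.cholesky(Sigma_j)

def chi2_stat(R):
    # quadratic form R^t Sigma_j^{-1} R, asymptotically chi-squared with k df
    return R @ Sigma_inv @ R

# Monte Carlo under the null hypothesis: mean should be near k, variance 2k
stats = np.array([chi2_stat(L @ rng.standard_normal(k)) for _ in range(5000)])
```

With a consistent $\widehat\Sigma_j$ in place of the stand-in, the same quadratic form applied to the observed increments gives the test statistic, to be compared with upper $\chi^2_k$ quantiles.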

Other tests
It is in principle easy to construct other test statistics based on the monitoring processes $R_n$ of (5.1), although their exact or limiting null distributions might be hard to tabulate or assess. There are ways of approximating such distributions, however, as we now illustrate. Consider $R_{n,j}$ of (5.2), for a suitable $K_{n,j}$, and define $\|R_{n,j}\|=\max_{t\le\tau}|R_{n,j}(t)|$ for $j=1,\ldots,p$.
These Kolmogorov-Smirnov type tests have well-defined limit distributions, namely those of $\|R_j\|=\max_{t\le\tau}|R_j(t)|$ with $R_j(t)$ as in (5.3), a process defined in terms of $K_j$, $W_j$, $\Lambda$ and $\alpha^*_{(1)}$. Options for deciding on an upper critical point in the null distribution of $\|R_{n,j}\|$ include the following. (i) One may simulate from the limit distribution, at the estimated versions of $K_j$, Q and $\alpha_j^*(s,\widehat\theta)$. This can be done with relative ease by simulating W processes, via independent normal increments. (ii) One may simulate from the $\|R_{n,j}\|$ distribution, again at its estimated position with respect to $K_j$, Q, and $\alpha_j^*$, by simulating full $N_i^*$ and $Y_i^*$ processes from the model where the ith life-time comes from the distribution with integrated hazard rate $z_{i,(1)}^{\rm t}A_{(1)}(t,\widehat\theta)+z_{i,(2)}^{\rm t}\widetilde A_{(2)}(t)$. This amounts to semiparametric bootstrapping at the estimated model.
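Option (i) can be sketched as follows, for the simplest situation of a scalar weight $K_j(s)=1$ and the martingale part of $R_j$ only (so the $\Lambda$ contribution is ignored); the $Q(s)$ used is a made-up stand-in for its estimated version:

```python
import numpy as np

rng = np.random.default_rng(6)
tau, n_grid, n_sim = 1.0, 200, 2000
s = np.linspace(0, tau, n_grid + 1)
ds = np.diff(s)
Q = 0.4 + 0.2 * s[1:]                  # stand-in for the estimated Q(s)

# W has independent N(0, Q(s) ds) increments; simulate n_sim paths of
# R_j(t) = int_0^t dW and take the running maximum of |R_j|
incr_sd = np.sqrt(Q * ds)
paths = np.cumsum(rng.standard_normal((n_sim, n_grid)) * incr_sd, axis=1)
sup_stats = np.abs(paths).max(axis=1)
crit_95 = np.quantile(sup_stats, 0.95)  # simulated upper critical point
```

The same skeleton extends directly to the full limit process: one draws the $\Lambda$ variable jointly with the W increments and subtracts $\gamma_j(t)^{\rm t}\Lambda$ before taking the supremum.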
Note that the above methods also apply to the simultaneous test statistic $\sum_j\|R_{n,j}\|$, and relatives thereof.

Simulations and an application
In this section we compare the fully nonparametric linear hazard regression model, that is, the Aalen model, with the partly parametric, partly nonparametric linear hazard regression model developed in this paper. First, in Sect. 6.1, this comparison takes place on simulated data; then Sect. 6.2 contains an analysis of n = 312 primary biliary cirrhosis patients who participated in a double-blind randomised study at the Mayo Clinic in the USA between January 1974 and May 1984. This dataset is contained in the R package survival (Therneau and Lumley 2013).

Simulations
We simulated 200 datasets of n = 2000 potentially right-censored survival times, with the covariates held fixed across the 200 simulations (reflecting that the large-sample theory of this paper is developed conditionally on the covariates, see Assumptions 1). The true hazard rate of the ith individual was taken to be
$$h_i(t)=\theta_1\theta_2t^{\theta_2-1}z_{i,1}+\theta_3t\,z_{i,2}+0.572\,t^{2-1}+0.123\,t\,z_{i,4},$$
with $\theta_1=0.123$, $\theta_2=2$, and $\theta_3=0.567$. The censoring times were drawn from the uniform distribution on $[0,1]$, resulting in about 55 percent of the survival times being observed. To each data set we fit the Aalen linear hazard regression model with four regressors, and also a correctly specified partly parametric, partly nonparametric model, that is, the model with hazard rate
$$h_i(t)=\theta_1\theta_2t^{\theta_2-1}z_{i,1}+\theta_3t\,z_{i,2}+\alpha_3(t)+\alpha_4(t)\,z_{i,4},$$
meaning that $\alpha_3(t)$ is the 'intercept' function. Figure 1 displays histograms of the z-values (or Wald statistics) $\sqrt n(\widehat\theta_k-\theta_k)/\widehat{\rm se}(\widehat\theta_k)$ for $k=1,2,3$, and $\sqrt n\{\widetilde A_j(t)-A_j(t)\}/\widehat{\rm se}(\widetilde A_j(t))$ for $j=3,4$, the latter evaluated at time $t=0.5$. The standard errors $\widehat{\rm se}(\widehat\theta_k)$ and $\widehat{\rm se}(\widetilde A_j(t))$ used to compute these statistics are estimates of the true standard deviations of the estimators. The histograms indicate that with $p=q=2$ and $m=3$, a rather large sample size is needed for the normality to really kick in for all estimands.
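Since $\theta_2=2$ makes every term of this hazard linear in t, the cumulative hazard is quadratic, $H_i(t)=c_it^2$, and simulation is closed-form via $T_i=\sqrt{E_i/c_i}$ with $E_i$ standard exponential. The sketch below illustrates the data-generating step; the covariate distributions are our own choice, as the paper does not spell them out, so the censoring fraction will differ from the 55 percent reported:

```python
import numpy as np

theta1, theta2, theta3 = 0.123, 2.0, 0.567
rng = np.random.default_rng(7)
n = 2000
z1 = rng.uniform(0, 2, size=n)        # covariate choices are illustrative
z2 = rng.integers(0, 2, size=n)
z4 = rng.uniform(0, 1, size=n)

# H_i(t) = c_i t^2 with c_i = theta1*z1 + (theta3/2)*z2 + 0.572/2 + (0.123/2)*z4
c = theta1 * z1 + 0.5 * theta3 * z2 + 0.5 * 0.572 + 0.5 * 0.123 * z4
E = rng.exponential(size=n)
T = np.sqrt(E / c)                    # inversion: H_i(T) = E
C = rng.uniform(0, 1, size=n)         # uniform censoring, as in the paper
times, delta = np.minimum(T, C), (T <= C).astype(int)
```

The identity $\exp\{-H_i(T_i)\}=\exp(-E_i)\sim{\rm U}(0,1)$ provides a quick internal check that the inversion is correct.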

Empirical application
Primary biliary cirrhosis (PBC) is a rare but serious liver disease of unknown origin. Between January 1974 and May 1984, 312 PBC patients were included in a double-blind randomised study at the Mayo Clinic in the USA, comparing D-penicillamine with placebo. In our analysis, we have chosen to model the hazard rate of the ith patient as
$$h_i(t)=\alpha_1(t)+\alpha_2(t)\,{\rm treat}_i+\alpha_3(t)\,{\rm alb}_i,$$
where ${\rm treat}_i$ is an indicator taking the value zero if placebo, and one if D-penicillamine; and ${\rm alb}_i$ is the concentration of serum albumin (in g/dl) of the ith patient. The covariate ${\rm alb}_i$ was centred around its mean, and standardised by its standard deviation. We estimated the cumulatives $A_j(t)=\int_0^t\alpha_j(s)\,{\rm d}s$, both by using the Aalen estimator $\widehat A(t)$ of (2.2), and by parametrising the regression functions $\alpha_1(t)$ and $\alpha_2(t)$ as $\alpha_1(t)=\alpha_1(t,\theta)=\theta_1\theta_2t^{\theta_2-1}$ and $\alpha_2(t)=\alpha_2(t,\theta)=\theta_3t$, and using the estimation methods developed in this paper.
The estimated cumulative regression functions, along with pointwise approximate 95 percent confidence bands, are plotted in Fig. 2. For the parametric cumulative regressors, the confidence bands were obtained by an application of the delta method together with Proposition 4.1. From the two plots in Fig. 2 it is not easy to see that the confidence bands for the estimators in the partly parametric partly nonparametric model are narrower than those of the Aalen model. In Fig. 3, therefore, we have plotted the estimated pointwise standard deviations for all six estimators of the cumulative regression functions, clearly showing the gains in efficiency.
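For readers wishing to reproduce such plots, Aalen's least-squares estimator of the cumulative regression functions can be sketched in a few lines; this is the textbook construction, not necessarily the exact notation of (2.2):

```python
import numpy as np

def aalen_cumulatives(t_obs, delta, Z):
    """Aalen's least-squares estimator of the cumulatives A_j(t).

    At each observed event time t_i the increment of A_hat is
    (X'X)^{-1} X' dN(t_i) = (X'X)^{-1} z_i, with X the matrix stacking
    the covariate rows of individuals still at risk at t_i.  In practice
    estimation stops once X'X turns singular; here this is handled
    crudely with a pseudo-inverse.
    """
    order = np.argsort(t_obs)
    times, path = [], []
    acc = np.zeros(Z.shape[1])
    for i in order:
        if delta[i] == 0:
            continue  # censored: no jump in the counting process
        X = Z[t_obs >= t_obs[i]]            # at-risk design matrix
        acc = acc + np.linalg.pinv(X.T @ X) @ Z[i]
        times.append(t_obs[i])
        path.append(acc.copy())
    return np.array(times), np.array(path)

# Sanity check: a single covariate equal to 1 reduces the estimator to
# the Nelson-Aalen estimator, with increments 1/3, 1/2, 1 here.
t_obs = np.array([1.0, 2.0, 3.0])
delta = np.array([1, 1, 1])
Z = np.ones((3, 1))
times, path = aalen_cumulatives(t_obs, delta, Z)
print(path[:, 0])
```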
To take a stab at assessing the goodness of fit of the parametric functions, Fig. 5 displays the R_{n,1}(t) and R_{n,2}(t) functions of (5.2), as developed in Sect. 5. In particular, the blue line shows √n(Â_1(t) − θ̂_1 t^{θ̂_2}), while the green line shows √n(Â_2(t) − θ̂_3 t²/2). We see that the parametric regressors seem to give a decent fit for the first eight years in the data, while for the remaining years the Aalen estimates and the parametric estimates diverge somewhat. One should keep in mind, however, that with n = 312, the amount of data we have for these later years is rather limited, which means increasing variance for Â_1(t) and Â_2(t). A formal test for the adequacy of the parametric hazard functions may be carried out using the apparatus of Sect. 5.2. Figure 4 displays the estimated survival curves of an individual, corresponding to the Aalen model and to the partly parametric partly nonparametric linear hazard regression model, respectively, along with pointwise approximate 95 percent confidence bands (see Sect. B.3). The two survival curves in Fig. 4 follow each other closely, but the confidence band for the partly parametric partly nonparametric model is always tighter than that corresponding to the Aalen estimator.

Concluding remarks
We end our article with a list of concluding remarks, some pointing to further research.
A. Link to GAM. We have investigated parametric-nonparametric models for the Aalen hazard function model Σ_{j=1}^r z_j α_j(s). There are similarities to generalised additive regression models (GAM), where the mean response curve for covariates x_1, . . . , x_r is modelled as E(Y | x_1, . . . , x_r) = f_1(x_1) + · · · + f_r(x_r). The typical GAM machinery takes the regression functions f_j(x_j) to be nonparametric, and several estimation methods exist; see Hastie and Tibshirani (1990) and Wood (2017). Methods of the present paper may inspire parametric-nonparametric versions of GAM, with some of the f_j(x_j) modelled parametrically.
B. Local power of goodness-of-fit tests. The monitoring functions of Sect. 5, i.e. the R_n(t) and R_{n,j}(t), lead, as explained there, to classes of goodness-of-fit tests, including chi-squared and Kolmogorov-Smirnov type versions. One may also investigate the local power of such tests, by extending Proposition 5.1 to the situation where the true α_j(s) functions are O(1/√n) away from the parametric α_j(s, θ_0). Such results may then be used for constructing weight functions K_n(s) with optimal local power against certain envisaged alternatives.
C. Large-sample behaviour outside model conditions. In Sect. 4, clear limiting normality results have been derived under model assumptions. These may be extended to situations where the real underlying hazard structure takes the general Aalen form Σ_{j=1}^r z_j α_j(s), with the first p of the α_j(s) not necessarily lying inside the parametric models, say α_j(s, θ_j). This involves certain least false parameters θ_{0,j}. The benefit of having such more general outside-model results is partly to construct model robust methods for confidence intervals, etc., and partly to build appropriate model selection strategies.

D. FIC for model selection.
Our parametric-nonparametric model machinery has been developed for a given set of parametric model components, say α_j(s, θ_j) for components j = 1, . . . , p. It would clearly be useful to develop supplementary model selection methodology, for situations where the statistician is not able or willing to decide a priori which components to take parametric, and in that case which parametric structures to use. Methods of the AIC and BIC variety cannot be used, since there are no likelihood functions. One may, however, develop FIC methods, for the Focused Information Criterion; see Claeskens and Hjort (2008, Ch. 6-7) for a general discussion. FIC methods along the lines developed in Jullum and Hjort (2019) and Claeskens et al. (2019) can be constructed in the present setup. The starting assumption is that the nonparametric Aalen model holds, for certain unknown α_j(t) for j = 1, . . . , r. For a given quantity of interest, say μ = μ(α_1(·), . . . , α_r(·)), there would be a list of ensuing estimators, say μ̂_M for candidate model M. The FIC would then be an estimator of the mean squared error of these μ̂_M. Carrying this out would need large-sample normality results outside parametric model conditions, as briefly pointed to in point C above.
E. Alternative estimation strategies. Our estimators for the parametric-nonparametric model use for Step (a) minimisation of a certain criterion function C_n(θ) of (3.1), with the resulting θ̂ also being used in Step (b) for the nonparametric components. Other strategies may also be used for Step (a), including minimising other criterion functions for making α_{(1)}(s, θ) come close to the underlying α_{(1)}(s). Special versions of such ideas lead to M-type estimators, for which theory is given in Hjort (1985, Section 4). Extending the full theory to estimation of both θ and A_{(2)}(t) takes further efforts, however.
F. A parametric-nonparametric cure model. In recent years, cure models have gained much attention; see Amico and Van Keilegom (2018) for a review, and the references therein. These are models for survival times where an unknown fraction of the population under study is 'cured', in the sense that the individuals belonging to this fraction will never experience the event of interest. The population survival curve for the (standard) cure model takes the form S_pop(t) = 1 − π + π S(t), where S(t) is a proper survival function (that is, S(t) → 0 as t → ∞), and π is the probability of being susceptible to the event of interest. Both S(t) and π are typically modelled as functions of covariates, S(t) = S(t | z) and π = π(x^t γ), where z and x are potentially different sets of covariates. In Stoltenberg (2020), a cure model with a linear hazard regression model à la Aalen is introduced: as in (1.3), the (proper) survival function takes the form S(t | z) = exp{−z^t A(t)}, and estimation methods for the A_j(t) as well as the parameters entering π(x^t γ) are developed.
Inspired by the development of the present paper, estimation methods and accompanying large-sample theory could be developed for the partly parametric partly nonparametric cure model, that is, a model whose population survival function is S_pop(t | x, z) = 1 − π(x^t γ) + π(x^t γ) exp{−z^t A(t)}, with π(a): IR → [0, 1] some parametric function, for example the logistic one. The unknowns of this model, which need to be estimated from the data, are the parameter vectors γ and θ, as well as the nonparametric cumulatives A_j(t) = ∫_0^t α_j(s) ds for j = p + 1, . . . , p + q. Estimators for these may be obtained by combining the estimators developed in Stoltenberg (2020) with the two-step estimation procedure of the present paper.
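The population survival function of this cure model is easy to evaluate once its ingredients are in hand; the sketch below uses the logistic link mentioned above, with made-up cumulatives and parameter values purely for illustration (all names are ours, not the paper's).

```python
import numpy as np

def logistic(a):
    """Parametric link pi(a) in [0, 1]; the logistic choice mentioned in the text."""
    return 1.0 / (1.0 + np.exp(-a))

def pop_survival(t, x, gamma, z, A):
    """Population survival 1 - pi + pi * exp(-z' A(t)) of the cure model.

    A(t) must return the vector of cumulative regression functions; in the
    partly parametric model its first p entries would be A_j(t, theta) and
    the rest nonparametric estimates.
    """
    pi = logistic(x @ gamma)
    return 1.0 - pi + pi * np.exp(-(z @ A(t)))

# Illustrative (made-up) ingredients: A_1(t) = 0.1 t^2 standing in for a
# parametric Weibull-type cumulative, A_2(t) = 0.3 t for a nonparametric one.
A = lambda t: np.array([0.1 * t**2, 0.3 * t])
x = np.array([1.0, 0.5])          # covariates entering the cure probability
gamma = np.array([0.2, -0.4])     # cure-part parameters
z = np.array([1.0, 1.0])          # covariates entering the hazard

S0 = pop_survival(0.0, x, gamma, z, A)
S_inf = pop_survival(1e9, x, gamma, z, A)
print(S0, S_inf)  # S(0) = 1, and S(t) -> 1 - pi (the cured fraction) as t grows
```

The limit as t grows is 1 − π(x^t γ) rather than zero, which is exactly what distinguishes the cure model from an ordinary survival model.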
Acknowledgements Our work with this article has benefitted from discussions with Ian McKeague and Ingrid Van Keilegom, and we are grateful for constructive comments from two referees.
Funding Open access funding provided by University of Oslo (incl Oslo University Hospital).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Asymptotic optimality
Consider the family of models whose hazard rates are of the form (A.1). We assume that the first p regression functions are the parametric α_j(s, θ_j) for j = 1, . . . , p, so that r ≥ p. If θ ∈ IR^r, say, then (A.1) is an (r + qK)-dimensional model. This is now a fully parametric model, so we can use theory from Sect. 2.3. Until we say otherwise, we are going to assume that the true model is of the form (A.1), with K held fixed. We split the score function of the log-likelihood into a θ-part and an η-part. The θ-part, U_n, is an r × 1 column vector (with r being the dimension of θ), involving the p × r matrix α_* containing the partial derivatives ∂α_j(s, θ_j)/∂θ_j for j = 1, . . . , p.
We also have the η-part of the score function, V_n = (V_{n,1}^t, . . . , V_{n,K}^t)^t, a qK × 1 column vector whose components V_{n,l} are q × 1 column vectors. Let J_n be the variance process of (U_n, V_n), and let (θ̂, η̂) be the maximum likelihood estimator. Under the conditions of Proposition 2.1, we know that √n(θ̂ − θ_0, η̂ − η_{0,K}) converges in distribution to N_{r+qK}(0, Σ_K^{−1}) as n → ∞, where Σ_K is the limit in probability of J_n, and θ_0, η_{0,K} denote the true values of the parameters (under the K'th model). We need to find the limiting distribution of θ̂ and of the estimator of the cumulative A_{(2)}(t, η) = ∫_0^t α_{(2)}(s, η) ds. Introduce the (p + q) × (p + qK) matrix function H_t: IR^{p+qK} → IR^{p+q} such that H_t(θ^t, η^t)^t = (θ_1, . . . , θ_r, A_{p+1}(t, η), . . . , A_{p+q}(t, η))^t for each t. (The function H_t also depends on K, but we suppress this from the notation as it is of little relevance in the following.) An application of the delta method now yields the limit distribution of H_t(θ̂^t, η̂^t)^t. Doing the matrix algebra, we see that Σ_{K,11} is an r × r matrix and Σ_{K,21} = Σ_{K,12}^t, while Σ_{K,22} is the qK × qK block diagonal matrix whose blocks are the q × q matrices (Σ_{K,22})_l = F_{22} I_{W_l}, for l = 1, . . . , K.
Here, F_{11}, F_{12} = F_{21}^t, and F_{22} are the probability limits of F_{n,11}, F_{n,12} = F_{n,21}^t, and F_{n,22}, respectively, these latter being the blocks of the matrix F_n defined in (2.6). We now consider Σ_K as K → ∞, that is, as the interval lengths shrink to zero. Under appropriate conditions on the covariates, on the probability limit of n^{−1} Σ_{i=1}^n Y_i(s), and on the function s → α_{*,(1)}(s) (e.g. bounded derivatives), the l'th diagonal block of Σ_{K,22}, and with it Σ_{K,11}, can be expressed as Riemann sums with convergent limits. In conclusion, J_n →_p Σ_K as n → ∞ with K fixed, and Σ_{K,11}^{−1} converges as K → ∞. The limit, say Σ_{0,11}^{−1}, is the expression of (4.3). Suppose that there is a consistent estimator for θ_0 with smaller variance than Σ_{0,11}^{−1} under the partly parametric partly nonparametric model in (1.4). Denote its variance matrix by V, so that V < Σ_{0,11}^{−1} (meaning that V − Σ_{0,11}^{−1} is a negative definite matrix). Since Σ_{K,11}^{−1} ≤ Σ_{K+1,11}^{−1} for all K and Σ_{K,11}^{−1} → Σ_{0,11}^{−1} as K → ∞, there is a K_0 such that V < Σ_{K,11}^{−1} for all K ≥ K_0. But Σ_{K,11}^{−1} is the Cramér-Rao lower bound for estimating θ_0 under the K'th parametric model of the form (A.1), so V < Σ_{K,11}^{−1} cannot happen, and consequently there cannot be a consistent estimator for θ_0 with smaller variance than Σ_{0,11}^{−1} under the model in (1.4).

B Efficiency and relative improvement calculations
There are general benefits to building and using parametric component models rather than nonparametric ones, provided the models can be assessed and checked for adequacy, a theme addressed in the following section. In this section we consider questions related to efficiency: how much is gained, in precision, by using the parametric-nonparametric model (1.4), compared to the nonparametric Aalen methods?

B.1 Asymptotic relative efficiencies
We have seen in previous sections that various limit distributions depend crucially on the limit matrix functions F, G, H of F_n, G_n, H_n, defined in Sect. 2, along with certain relatives. These functions will now be studied and compared for a certain setup, also to illustrate aspects of relative efficiency. Assume that the censoring mechanism works independently of the life-times, with ρ(s) = P{C_i ≥ s} its survival function. Then E{Y_i(s) | z_i} = exp{−z_i^t A(s)} ρ(s). With Y(s) and Z denoting a generic at-risk indicator and covariate vector, distributed according to the covariate distribution in question, we deduce expressions for these limit functions; cf. Assumptions 1. Assume now that the rates α_j are constant. Then F_{0,j,k}(s) has derivative F′_{0,j,k}(s) = −G_{0,j,k}(s), and the next derivative gives dH_{0,j,k}(s). Suppose further that the covariates Z_1, . . . , Z_r are independent, with Laplace transforms L_j(u) = E exp(−u Z_j) = exp{−u ψ_j(u)}. Then L′_j(u) = −E Z_j exp(−u Z_j), which leads to explicit expressions involving δ_{j,k}, where δ_{j,k} equals 1 when j = k, and zero otherwise. These functions may now be studied and integrated numerically to give the F functions, for different scenarios. We shall be content to illustrate this here for the case where the Z_j's have gamma distributions.
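For the gamma case the required transforms are explicit; with shape a_j and rate b_j this is a standard Laplace transform identity, written out here for convenience:

```latex
Z_j \sim \mathrm{Gamma}(a_j, b_j):\quad
L_j(u) = \Bigl(1 + \frac{u}{b_j}\Bigr)^{-a_j},
\qquad
\psi_j(u) = -\frac{\log L_j(u)}{u} = \frac{a_j}{u}\,\log\Bigl(1 + \frac{u}{b_j}\Bigr).
```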
This ratio can now be studied, as a curve in t, for different sets of parameters. In certain setups the precision of the parametric estimator can be substantially better than that of the nonparametric one.

B.2 Efficiency improvements for the plain-weights estimators
In the previous subsection we have treated the particular case of theoretically optimal estimators, for θ and A_{p+1}, . . . , A_{p+q}. These involve the use of certain cumbersome optimal weights w_i(s) = 1/{z_i^t α(s)}, an accompanying optimal V_n(s) matrix when minimising the criterion function C_n(θ) of (3.1), and so on. This has led to relatively clear and transparent formulae for the relevant relative efficiency ratios.
In practice we would be more interested in similar efficiency ratios for our favoured default choice V_n(s) = n^{−1} Σ_{i=1}^n Y_i(s) z_i z_i^t, however. Propositions 4.1 and 4.2 may be used to find limit variance expressions in this case too, involving yet other quantities; again, expectation is with respect to the ergodic distribution of covariates, see Assumptions 1. These expressions can be evaluated numerically, along with other required quantities, for given setups of covariate distributions and A_j(t) functions. The efficiency ratios do not have clear formulae, however, so comparisons are harder and less transparent than those in the previous subsection. We have carried out some numerical computations, in simple cases, and found ratios broadly similar to those reached above.
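The default weight matrix just mentioned is straightforward to compute from data; a minimal sketch (variable names are ours):

```python
import numpy as np

def V_n(s, t_obs, Z):
    """Default weight matrix V_n(s) = n^{-1} sum_i Y_i(s) z_i z_i^t,
    with Y_i(s) = 1{t_obs_i >= s} the at-risk indicator."""
    at_risk = (t_obs >= s).astype(float)
    return (Z * at_risk[:, None]).T @ Z / len(t_obs)

# Small illustration: three individuals, intercept plus one covariate.
t_obs = np.array([0.3, 0.7, 1.2])
Z = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 1.0]])
V0 = V_n(0.0, t_obs, Z)   # everyone at risk: Z'Z / n
V1 = V_n(1.0, t_obs, Z)   # only the third individual still at risk
```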
Crucially, one may also use the variance formulae developed in Sect. 4 to compute standard errors, i.e. estimated standard deviations. For example, a focus parameter of particular interest in survival analysis is the survival function evaluated at some fixed time point t. For an individual with covariates z_0 = (z_{0,(1)}^t, z_{0,(2)}^t)^t, so that the focus parameter is μ = exp{−z_0^t A(t)}, we have √n(μ̂_Partly − μ) →_d N(0, μ² z_0^t Σ(t) z_0), where Σ(t) is the matrix appearing in (4.5).
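The delta-method standard error implied by this limit result is a one-liner; the sketch below uses made-up numbers, and `Sigma_hat_t` stands in for an estimate of the limit covariance matrix in (4.5) (the name is ours, not the paper's).

```python
import numpy as np

def survival_se(z0, A_hat_t, Sigma_hat_t, n):
    """Delta-method standard error of mu_hat = exp(-z0' A_hat(t)).

    Since sqrt(n)(mu_hat - mu) -> N(0, mu^2 z0' Sigma(t) z0), a plug-in
    standard error is mu_hat * sqrt(z0' Sigma_hat(t) z0 / n).
    """
    mu_hat = np.exp(-(z0 @ A_hat_t))
    return mu_hat, mu_hat * np.sqrt(z0 @ Sigma_hat_t @ z0 / n)

# Illustrative numbers only; n = 312 matches the PBC sample size.
z0 = np.array([1.0, 0.0, 1.5])
A_hat_t = np.array([0.4, 0.1, 0.05])       # estimated cumulatives at time t
Sigma_hat_t = np.diag([0.2, 0.3, 0.25])    # stand-in covariance estimate
mu_hat, se = survival_se(z0, A_hat_t, Sigma_hat_t, 312)

lo, hi = mu_hat - 1.96 * se, mu_hat + 1.96 * se   # pointwise 95% band at t
print(round(mu_hat, 3), round(se, 4))
```

Repeating this over a grid of time points t gives precisely the kind of pointwise confidence band shown in Fig. 4.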
A direct comparison of the standard errors of μ̂_Aalen and μ̂_Partly for a given focus parameter allows one to see whether there is a clear gain in going from nonparametric to parametric for the α_j(s) components in question. This is illustrated in Fig. 3, with proof-of-the-pudding plots of the standard errors for the two estimators of the cumulatives A_1(t), A_2(t), A_3(t), and is also exemplified by the survival curve plot of Fig. 4, where the confidence band of the partly parametric survival curve is visibly narrower than the band corresponding to the Aalen estimator.