Maximum Likelihood With a Time Varying Parameter

We consider the problem of tracking an unknown time-varying parameter that characterizes the probabilistic evolution of a sequence of independent observations. To this aim, we propose a stochastic gradient descent-based recursive scheme in which the log-likelihood of the observations acts as a time-varying gain function. We prove convergence in mean-square error in a suitable neighbourhood of the unknown time-varying parameter and illustrate the details of our findings in the case where the data are generated from distributions belonging to the exponential family.


Introduction
When estimating unknown parameters in a dynamic model, the optimal solution to the parameter estimation problem may not remain constant. Specifically, the optimal values of the model parameters may change through time because of the evolution of the underlying process, and finding them is, in general, not straightforward. A survey of basic techniques for tracking the time-varying dynamics of a system is provided in [Ljung and Gunnarsson, 1990], where recursive algorithms in non-stationary stochastic optimization are analysed under different assumptions about the true system's variations; see also [Simonetto et al., 2020] for a review in a purely deterministic setting. In [Delyon and Juditsky, 1995] the problem of tracking the randomly drifting parameters of a linear regression system is tackled, and [Zhu and Spall, 2016] builds a computable bound on the error with which a stochastic approximation scheme with constant gain tracks a non-stationary target. Subsequently, [Wilson et al., 2019] introduces a framework for sequentially solving convex stochastic minimization problems in which the distance between successive minimizers is bounded. The minimization problems are then solved by sequentially applying an optimization algorithm, such as stochastic gradient descent (SGD). In a similar setting, [Cao et al., 2019] establishes an upper bound on the regret of a projected SGD algorithm with respect to the drift of the dynamic optima, while [Cutler et al., 2021] provides novel non-asymptotic convergence guarantees for stochastic algorithms with iterate averaging. We study time-varying stochastic optimization in a general statistical setting in which we are given a sequence of independent observations {X_t}_{t∈N} whose associated densities possess a parameter that changes through time. In such a framework, a problem of interest is finding a useful estimator of the time-varying parameter at a certain time t, generalizing the classical problem of parameter estimation from the
static setting to the time-varying one. Ideally, one would like to find a sequence of estimators that track the time-varying parameter through time as closely as possible. We show that, under some assumptions, the celebrated SGD algorithm [Robbins and Monro, 1951] produces a sequence of estimators that eventually track the time-varying parameter, up to a neighborhood, as the number of observations increases. Established in a general setting that intersects with the frameworks of [Cao et al., 2019], [Cutler et al., 2021] and [Wilson et al., 2019], our results differ from previous work mainly in one aspect: our objective functions have the specific form of expected log-likelihoods, a feature we exploit through their information-theoretic properties. The work we present is also linked to the class of score-driven models [Creal et al., 2013]. Score-driven models are a class of observation-driven models (here we are using the terminology introduced by [Cox et al., 1981]) that update the dynamics of the time-varying parameter through the score of the conditional distribution of the observations. Specifically, the same proof technique we utilize to obtain our result can be used to show that a so-called Newton-score update [Blasques et al., 2015], with the parameter multiplying the score appropriately chosen, will track the time-varying parameter of interest through time even under possible model misspecification. A final way to interpret the results of this work is as robustness results for a single-batch stochastic gradient procedure when we incorrectly assume that our observations are identically distributed. Indeed, the results show that even if we wrongly assume the true parameter to be static (i.e. that the observations are i.i.d.), running a stochastic gradient algorithm with a batch of size one at each time to optimize the log-likelihood allows us to track the pseudo-true time
varying parameter up to a neighborhood, provided it does not move too wildly. The paper is organised as follows: in Section 2 we list and discuss the assumptions of our framework and state the main result. We then present a class of examples given by the exponential family and discuss the performance of SGD relative to the one-observation maximum likelihood estimator at each time. In Section 3 we provide a detailed proof of our main result.

Statement of the main result
Let {X_t}_{t∈N} be a sequence of independent m-dimensional random vectors defined on a common probability space (Ω, F, P). In the sequel we write E[·] for the expected value with respect to the probability measure P and ∥·∥ for the Euclidean norm in R^d. We assume that for any t ∈ N the random vector X_t possesses a joint probability density function which depends on the d-dimensional parameter λ*_t; in symbols, X_t ∼ p(·|λ*_t). Our aim is to estimate the sequence {λ*_t}_{t∈N} from the observed values {X_t}_{t∈N}. To this aim we choose λ_1 ∈ R^d and utilize the SGD recursion

λ_{t+1} = λ_t + α ∇_λ ln p(X_t|λ_t),  t ∈ N, (2.1)

with a constant gain α > 0. Utilizing SGD to attempt to track λ*_t is motivated by the principle underlying classical maximum likelihood estimation: in fact, under some canonical assumptions we present below, λ*_t is the maximum of the expected log-likelihood λ ↦ E[ln p(X_t|λ)]. Thus, finding a sequence of estimators that track the time-varying parameter as closely as possible is connected to finding the maxima of a sequence of expected log-likelihoods, a generalization of the classical static framework. Since we have no direct access to the expected log-likelihoods, but only a single observation for each time t, we categorize the problem as a time-varying stochastic optimization problem.
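To make the recursion concrete, here is a minimal sketch (our own illustration, not from the paper) of (2.1) for a hypothetical unit-variance Gaussian model with a slowly drifting mean, for which the score is simply ∇_λ ln p(x|λ) = x − λ; the drift schedule, gain α and all names are illustrative choices.

```python
import math
import random

def sgd_track(xs, lam1=0.0, alpha=0.5):
    """One pass of recursion (2.1), lam_{t+1} = lam_t + alpha * score(x_t, lam_t),
    specialized to a unit-variance Gaussian where score(x, lam) = x - lam."""
    lam = lam1
    path = []
    for x in xs:
        lam = lam + alpha * (x - lam)   # gradient ascent step on ln p(x_t | lam)
        path.append(lam)
    return path

random.seed(0)
T = 2000
# slowly drifting true parameter lam*_t (small K in the sense of Assumption 2.5)
true_path = [math.sin(0.005 * t) for t in range(T)]
xs = [random.gauss(mu, 1.0) for mu in true_path]
est = sgd_track(xs)

# after a transient, the iterates stay in a neighbourhood of lam*_t
tail_err = [abs(est[t] - true_path[t]) for t in range(T // 2, T)]
print(sum(tail_err) / len(tail_err) < 1.0)
```

With α = 1/2 the recursion is an exponentially weighted average of the observations, which is why a slowly moving mean is tracked up to a noise-driven neighbourhood.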
The assumptions we will require to obtain our result are the following.
Assumption 2.1 (Smoothness of the log-likelihood). The function

λ ↦ −ln p(x|λ),  λ ∈ R^d, (2.2)

is twice continuously differentiable for all x ∈ R^m; moreover, differentiation with respect to λ_i, λ_j and expectation can be interchanged for all i, j ∈ {1, ..., d} and t ∈ N.
Assumption 2.2 (Strong convexity). The function in (2.2) is strongly convex uniformly with respect to x ∈ R^m: i.e., there exists a positive constant ℓ such that for all x ∈ R^m the matrix H_λ[−ln p(x|λ)] − ℓ I_d is positive semi-definite. Here, H_λ[−ln p(x|λ)] stands for the Hessian matrix of the function in (2.2), while I_d denotes the d × d identity matrix.

Assumption 2.3 (Lipschitz continuity of the gradient). The gradient of the function in (2.2) is globally Lipschitz continuous uniformly with respect to x ∈ R^m: i.e., there exists a positive constant L such that for all x ∈ R^m we have

∥∇_λ ln p(x|λ) − ∇_λ ln p(x|μ)∥ ≤ L ∥λ − μ∥  for all λ, μ ∈ R^d.

Assumptions 2.2 and 2.3 are classical in the optimization literature, see for instance [Boyd and Vandenberghe, 2004] and [Bottou et al., 2018]; we have utilized the versions of [Nesterov, 2014]. We remark that Assumption 2.2 may seem excessively restrictive at first glance, but in Example 2.9 below we present a large family of examples where it holds.
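As a concrete illustration of Assumptions 2.2 and 2.3 (a toy example of our own, not from the text): for the unit-variance Gaussian in canonical form, −ln p(x|λ) = −λx + λ²/2 + x²/2 + ln√(2π), whose second derivative in λ is identically 1 regardless of x, so both assumptions hold with ℓ = L = 1. A finite-difference check:

```python
import math

def neg_log_lik(x, lam):
    # -ln p(x | lam) for the canonical unit-variance Gaussian family:
    # p(x|lam) = exp(lam*x - lam**2/2) * exp(-x**2/2) / sqrt(2*pi)
    return -(lam * x - 0.5 * lam**2) + 0.5 * x**2 + 0.5 * math.log(2 * math.pi)

def second_derivative(f, lam, h=1e-4):
    # central finite difference for the curvature in lam
    return (f(lam + h) - 2 * f(lam) + f(lam - h)) / h**2

# the Hessian is constant in lam and independent of x: ell = L = 1
for x in (-2.0, 0.0, 3.5):
    for lam in (-1.0, 0.0, 2.0):
        curv = second_derivative(lambda l: neg_log_lik(x, l), lam)
        assert abs(curv - 1.0) < 1e-3
print("curvature constant: ell = L = 1")
```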
Remark 2.4. Assumptions 2.1 and 2.3 imply that

E[ ∥∇_λ ln p(X_t|λ*_t)∥² ] = tr I(λ*_t) ≤ dL,

where we have denoted by I(λ*_t) the Fisher information matrix of X_t, so that tr I(λ*_t) is its trace. In fact, by Assumption 2.1 the Fisher information matrix coincides with the expected Hessian E[ H_λ[−ln p(X_t|λ*_t)] ], whose eigenvalues are bounded above by L thanks to Assumption 2.3.

We will use Remark 2.4 to bound the quantity E[ ∥∇_λ ln p(X_t|λ_t)∥² ]. In the general setting of the optimization literature, a bound on this quantity requires an extra assumption, see [Bottou et al., 2018] and the discussion in [Nguyen et al., 2018]. In our setting we manage to avoid this type of additional assumption thanks to the properties of the Fisher information matrix.

Our last assumption concerns the evolution of the time-varying parameter {λ*_t}_{t∈N}.

Assumption 2.5 (Lipschitz continuity of the true parameter). There exists a positive constant K such that

∥λ*_{t+1} − λ*_t∥ ≤ K  for all t ∈ N.

Assumption 2.5 has been used throughout the literature, see for example [Simonetto et al., 2020], [Cao et al., 2019] and [Wilson et al., 2019], since some limitation on the behavior of the sequence of true parameter values must be imposed for it to be trackable.
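The identity in Remark 2.4 can be checked by Monte Carlo for the same canonical unit-variance Gaussian toy model used above (an illustration under our own assumptions; here d = 1, L = 1, and the score at λ is x − λ):

```python
import random

random.seed(1)
lam_star = 0.7       # hypothetical true parameter at a fixed time t
N = 200_000

# score of the canonical unit-variance Gaussian: d/dlam ln p(x|lam) = x - lam
samples = [random.gauss(lam_star, 1.0) for _ in range(N)]
mc = sum((x - lam_star) ** 2 for x in samples) / N

# E[ score^2 ] = tr I(lam*) = A''(lam*) = 1 <= d * L
assert abs(mc - 1.0) < 0.05
print(round(mc, 2))
```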
We can now state our main theorem.
Theorem 2.6. Let Assumptions 2.1, 2.2, 2.3 and 2.5 hold. Then, for α ∈ (0, 1/L), the asymptotic bound (2.3) on the tracking error holds, where ϕ(α, L) := √(1 − 2Lα + 2L²α²); note that ϕ(α, L) < 1 precisely for α in this range. Moreover, the minimum of the right-hand side in (2.3) is attained at α = 1/(ℓ + L), and in this case the last inequality reads as in (2.4).

Remark 2.7. Notice that λ_{t+1} depends on X_1, X_2, ..., X_t, so as an estimator it is natural to compare it with λ*_t.

Remark 2.8. In the case of model misspecification, i.e. when the true distribution of the observations is not included in the parametric model {p(·|λ)}_{λ∈R^d}, the same proof technique can be utilized to show that the recursion (2.1) tracks the so-called pseudo-true time-varying parameter λ̃_t, which is defined as

λ̃_t := argmax_{λ ∈ R^d} E[ln p(X_t|λ)].

We recall that the pseudo-true time-varying parameter λ̃_t minimizes the Kullback–Leibler divergence between the law of the data generating process and the model densities at each time t; see [White, 1982] and [Akaike, 1973] for additional details. The only technical difference in the proof is that Remark 2.4 cannot be used, since E[ ∥∇_λ ln p(X_t|λ̃_t)∥² ] is no longer related to the Fisher information matrix of X_t. Thus, an additional assumption is needed to control E[ ∥∇_λ ln p(X_t|λ̃_t)∥² ], but this is standard practice in the optimization literature; see [Nguyen et al., 2018] for a discussion of this kind of assumption.
Example 2.9. The exponential family in canonical form provides a class of natural examples where Theorem 2.6 holds. Take as the parameter of interest the natural parameter of a distribution belonging to the exponential family put in canonical form, i.e.

p(x|λ) = h(x) exp( ⟨λ, T(x)⟩ − A(λ) ),
where h : R^m → R is a non-negative function, T : R^m → R^d is a sufficient statistic and A : R^d → R is chosen so that p(x|λ) integrates to one. A standard result for exponential families, see for instance Theorem 1.6.3 in [Bickel and Doksum, 2001], is that A is a convex function of λ; since H_λ[−ln p(x|λ)] = H_λ A(λ) does not depend on x, this fact implies that one can find, restricting if necessary the range of λ (and hence of {λ*_t}_{t∈N}) to a suitable convex compact set Λ, the positive constants ℓ and L required for the validity of Assumptions 2.2-2.3. Note that the restriction of the range of λ to the convex compact set Λ is carried out by simply modifying (2.1) as

λ_{t+1} = Π_Λ( λ_t + α ∇_λ ln p(X_t|λ_t) ),

where Π_Λ denotes the orthogonal projection onto the set Λ. This alternative scheme does not affect the validity of Theorem 2.6; in fact, since λ*_t ∈ Λ, the contraction property of Π_Λ gives

∥λ_{t+1} − λ*_t∥ ≤ ∥λ_t + α ∇_λ ln p(X_t|λ_t) − λ*_t∥,

and this corresponds to the first step in the proof of Theorem 2.6 (see Section 3 below for more details).
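A minimal sketch of the projected scheme for a canonical Bernoulli family, p(x|λ) = exp(λx − A(λ)) with A(λ) = ln(1 + e^λ) and T(x) = x, where the score is x − sigmoid(λ); the set Λ = [−2, 2], the gain and the drift schedule are our own illustrative assumptions:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def project(lam, lo=-2.0, hi=2.0):
    # orthogonal projection onto the compact interval Lambda = [lo, hi]
    return min(max(lam, lo), hi)

def projected_sgd(xs, lam1=0.0, alpha=0.5):
    # lam_{t+1} = Pi_Lambda( lam_t + alpha * grad ln p(x_t | lam_t) )
    # Bernoulli in canonical form: score(x, lam) = x - sigmoid(lam)
    lam = lam1
    path = []
    for x in xs:
        lam = project(lam + alpha * (x - sigmoid(lam)))
        path.append(lam)
    return path

random.seed(2)
T = 5000
true_lam = [1.0 + 0.5 * math.sin(0.002 * t) for t in range(T)]  # drifts inside Lambda
xs = [1 if random.random() < sigmoid(l) else 0 for l in true_lam]
est = projected_sgd(xs)

assert all(-2.0 <= l <= 2.0 for l in est)   # iterates stay in Lambda
tail = [abs(est[t] - true_lam[t]) for t in range(T // 2, T)]
print(sum(tail) / len(tail))
```

The projection never increases the distance to any point of Λ, which is exactly the contraction property used above.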
An important question in applied settings is whether the estimator λ_t defined in (2.1) performs asymptotically better than the maximum likelihood estimator λ̂_t calculated by optimizing the one-observation log-likelihood λ ↦ ln p(X_t|λ). The following example showcases that there are indeed cases where utilizing (2.1) is beneficial.
Example 2.10. Referring to Example 2.9 and setting m = d = 1 for ease of notation, we consider a sequence of independent observations {X_t}_{t∈N} with X_t ∼ p(·|λ*_t). We assume in addition that λ ↦ A''(λ) is continuous and we restrict the parameter space to Λ = [λ_m, λ_M] for suitable real numbers λ_m < λ_M. Observe that Assumptions 2.2 and 2.3 hold in this case with ℓ = min_{λ∈Λ} A''(λ) and L = max_{λ∈Λ} A''(λ). In Theorem 2.6 we obtained an upper bound for the asymptotic mean-square error of λ_t as defined in (2.1). We now want to compare it with the mean-square error of the sufficient statistic T(X_t), which we assume to be unbiased; this means considering the quantity

E[(T(X_t) − λ*_t)²] = Var(T(X_t)) = A''(λ*_t), (2.5)

where the last equality follows from Theorem 1.6.2 in [Bickel and Doksum, 2001]. Therefore, our estimator λ_t performs asymptotically better than T(X_t) if the inequality (2.6) holds. Here, the left-hand side corresponds to the right-hand side of (2.4) with d = 1, while the right-hand side follows from (2.5). We want this inequality to hold for all possible values of the sequence {λ*_t}_{t∈N}, and this is achieved by taking the infimum of the right-hand side of (2.6), i.e., we want inequality (2.7) to hold. A simple investigation of the previous inequality shows that its left-hand side increases for small values of ℓ or large values of L; hence, there exist ℓ̄ and L̄ such that for all ℓ̄ ≤ ℓ ≤ L ≤ L̄ the asymptotic mean-square error of λ_t is lower than the mean-square error of the sufficient statistic T(X_t). Figures 1 and 2 provide an illustration of this fact. Finally, notice that there are cases when the sufficient statistic of the exponential family is unbiased and coincides with the one-observation maximum likelihood estimator, as is the case if we choose as the parameter of interest the variance of a Gaussian.
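The comparison in Example 2.10 can be illustrated by simulation in the canonical unit-variance Gaussian case, where ℓ = L = 1, T(X_t) = X_t is unbiased with Var(T(X_t)) = A''(λ*_t) = 1, and the gain α = 1/(ℓ + L) = 1/2; the slow drift schedule below is an illustrative assumption of ours:

```python
import math
import random

random.seed(3)
T = 20_000
alpha = 0.5                                  # = 1/(ell + L) with ell = L = 1
true_lam = [0.3 * math.sin(0.001 * t) for t in range(T)]  # slow drift (small K)
xs = [random.gauss(l, 1.0) for l in true_lam]

lam, mse_sgd, mse_T = 0.0, 0.0, 0.0
for t in range(T):
    lam = lam + alpha * (xs[t] - lam)        # recursion (2.1), score = x - lam
    mse_sgd += (lam - true_lam[t]) ** 2
    mse_T += (xs[t] - true_lam[t]) ** 2      # T(X_t) = X_t, MSE = Var = 1
mse_sgd /= T
mse_T /= T

print(mse_sgd < mse_T)
```

When the drift is slow, the recursion averages out observation noise and beats the single-observation estimator; with a rapidly moving λ*_t the comparison can reverse.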
Proof of the main result

Using (2.1) and expanding the squared Euclidean norm we can write ∥λ_{t+1} − λ*_t∥² in terms of two quantities, which we denote A_1 and A_2. To treat A_1 we employ Theorem 2.1.12 from [Nesterov, 2014]; with C_1 := ℓL/(ℓ + L) and C_2 := 1/(ℓ + L) this gives (3.4). Notice that according to the definitions of C_1 and C_2 we can write

Figure 2: Plot of the surface z given by the minimum over α of the right-hand side of (2.3).