1 Introduction

Spherical data are observations that lie on the unit sphere \({\mathbb {S}}^{p-1} = \left\{ {\mathbf {y}} \in {\mathbb {R}}^p: {\mathbf {y}}^\top {\mathbf {y}} = 1 \right\} \). They arise in many scientific disciplines, including shape analysis, geology and meteorology [e.g. Mardia and Jupp (2000)] and more recently areas as diverse as genome sequence representations and text analysis [e.g. Hamsici and Martinez (2007)]. In this paper, we consider the regression problem in which the data are pairs \(\{\mathbf{x}_i, \mathbf{y}_i \}\), \(i=1, \ldots , n\), involving a \(q \times 1\) covariate vector, \(\mathbf{x}_i\), and a spherical response variable, \(\mathbf{y}_i \in {\mathbb {S}}^{2}\). The aim of regression modelling is to establish how the response variable \({\mathbf {y}}_i\) depends on \({\mathbf {x}}_i\).

Typical parametric regression models currently in use for spherical responses in dimension \(p \ge 3\) are fairly restrictive in the sense that (i) the covariates are assumed to have special structure, e.g. that the covariate is a scalar (such as time) or is itself on the sphere (i.e. a direction); and/or (ii) the models assume isotropic error distributions. Examples of (i) and (ii) in the literature are Chang (1986), Rivest (1989) and Rosenthal et al. (2014); see also Di Marzio et al. (2014) in a nonparametric context. Recent work in regression modelling on general Riemannian manifolds, for which the unit sphere is a special case, includes the nonparametric approach of Lin et al. (2017), who develop local regression models assuming Euclidean covariates, and the semi-parametric approach of Cornea et al. (2017), who use parametric link functions mapping from a general covariate space to the manifold, with a nonparametric error distribution; though in neither is the possibility of anisotropic errors explicitly considered.

The principal contribution of this paper is to introduce parametric regression models for spherical response data that relax both (i) and (ii). The motivation for doing so is that in many applications the covariates do not have the simple structure described in (i), and that there is rarely any basis for assuming a priori that the error distribution is isotropic.

There are two main ingredients of the spherical regression models we develop: a distribution on the sphere, to play the role of an error distribution, and a structural model linking the parameters of this error distribution to the covariates. Our approach is similar in spirit to generalised linear models in the sense that we express parameters of the distribution of \({\mathbf {y}}_i\) in terms of \({\mathbf {B}} {\mathbf {x}}_i\), where \({\mathbf {B}}\) is a matrix of parameters. Two simple distributions on the sphere, each broadly analogous to the isotropic normal distribution in \({\mathbb {R}}^2\), are the von Mises–Fisher and isotropic angular Gaussian (IAG) distributions. Both are “isotropic” (or equivalently “rotationally symmetric”) on the sphere at the mean direction, \({\tilde{{\varvec{\mu }}}} \in {\mathbb {S}}^2\), meaning that their contours are small circles centred on \({\tilde{{\varvec{\mu }}}}\). The von Mises–Fisher distribution arises from conditioning an isotropic multivariate normal random variable \({\mathbf {z}} \in {\mathbb {R}}^p\) to have unit norm. On \({\mathbb {S}}^2\), to which we specialise henceforth, it is often called the Fisher distribution. It has three free parameters: two to define the mean direction, \({\tilde{{\varvec{\mu }}}} \in {\mathbb {S}}^2\), and another scalar parameter, \(\kappa >0\), that controls concentration. Its density function on \({\mathbb {S}}^2\) is

$$\begin{aligned} f_\text {Fisher}( \mathbf{y} \vert \kappa , {\tilde{{\varvec{\mu }}}} ) = \frac{\kappa }{4 \pi \sinh (\kappa )} \exp \left( \kappa \mathbf{y}^\top {\tilde{{\varvec{\mu }}}} \right) . \end{aligned}$$
(1)

The IAG distribution arises from projecting (as opposed to conditioning) \({\mathbf {z}}\) to lie on \({\mathbb {S}}^{p-1}\). On \({\mathbb {S}}^2\) its density function is

$$\begin{aligned} f_\text {IAG}( \mathbf{y} \vert {\varvec{\mu }}) = \frac{1}{2 \pi } \exp \left[ \frac{1}{2}\left\{ {\left( \mathbf{y}^\top {\pmb \mu }\right) ^2} -{\pmb \mu }^\top {\pmb \mu } \right\} \right] M \! \left( {\mathbf{y}^\top {\pmb \mu }} \right) , \end{aligned}$$
(2)

where \(M(\alpha ) =\alpha \phi (\alpha ) + (1+\alpha ^2)\Phi (\alpha )\), and where \(\phi (\cdot )\) and \(\Phi (\cdot )\) are the standard normal probability density function and cumulative distribution function, respectively. It is parametrised by the vector \({\varvec{\mu }}\in {\mathbb {R}}^3\). In terms of \({\varvec{\mu }}\), the mean direction is \({\varvec{\mu }}/\Vert {\varvec{\mu }}\Vert \) and the concentration is determined by \(\Vert {\varvec{\mu }}\Vert \). Note that (2) could equally be re-parametrised in terms of \({\tilde{{\varvec{\mu }}}} = {\varvec{\mu }}/\Vert {\varvec{\mu }}\Vert \) and \(\kappa = \Vert {\varvec{\mu }}\Vert \), analogous to the parametrisation of (1), and likewise (1) could be re-parametrised in terms of a parameter \(\kappa {\tilde{{\varvec{\mu }}}} \in {\mathbb {R}}^3\); the distinction between parametrisations is a matter of modelling convenience and in the following we shall make use of both.
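For concreteness, densities (1) and (2) are straightforward to evaluate numerically. The following is a minimal sketch (in Python; the function names are ours and are not taken from the code accompanying the paper):

```python
import numpy as np
from scipy.stats import norm

def fisher_pdf(y, kappa, mu_tilde):
    """Fisher density (1) on S^2; y and mu_tilde are unit 3-vectors, kappa > 0."""
    return kappa / (4 * np.pi * np.sinh(kappa)) * np.exp(kappa * (y @ mu_tilde))

def iag_pdf(y, mu):
    """IAG density (2) on S^2; y is a unit 3-vector, mu is any vector in R^3."""
    alpha = y @ mu
    M = alpha * norm.pdf(alpha) + (1 + alpha**2) * norm.cdf(alpha)  # M(alpha)
    return np.exp(0.5 * (alpha**2 - mu @ mu)) * M / (2 * np.pi)
```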

Because they are isotropic, the 3-parameter Fisher and IAG distributions are too restrictive for many applications. Each, however, has a 5-parameter anisotropic generalisation: the Kent (1982) distribution, and the elliptically symmetric angular Gaussian (ESAG) distribution (Paine et al. 2017), respectively. Both the Kent and ESAG distributions have elliptical symmetry about the mean direction, that is, they have ellipse-like contours centred on the mean direction. The two extra parameters over their isotropic counterparts control the orientation and eccentricity of the elliptical contours. We describe the Kent and ESAG distributions in more detail in Sect. 2, but here introduce two parametrisations we shall use for each. The first parametrisation we shall consider is in terms of \((\kappa ,\beta ,{\varvec{\Gamma }})\), in which \(\kappa >0\) is a concentration parameter, \(\beta \ge 0\) is an eccentricity parameter, and \({\varvec{\Gamma }}= ({\tilde{{\varvec{\mu }}}} \,\,\, {\varvec{\xi }}_1 \,\, {\varvec{\xi }}_2)\in O(3)\) is an orthogonal matrix (i.e. \({\varvec{\Gamma }}^\top {\varvec{\Gamma }}= {\mathbf {I}}\), where \({\mathbf {I}}\) is the identity matrix), in which \({\tilde{{\varvec{\mu }}}}\) is the mean direction (having 2 degrees of freedom) and \(({\varvec{\xi }}_1, {\varvec{\xi }}_2)\) are the major and minor axes that identify the orientation of the elliptical contours (together having 1 remaining degree of freedom). This parametrisation generalises that of (1).

The second parametrisation we consider, generalising (2), is in terms of a pair of vectors, \({\varvec{\mu }}\in {\mathbb {R}}^3\) and \({\varvec{\gamma }}\in {\mathbb {R}}^2\), in which, as in (2), \({\varvec{\mu }}\) controls the mean direction and concentration; then \({\varvec{\gamma }}\in {\mathbb {R}}^2\) controls eccentricity and orientation of the elliptical contours.

These two parametrisations lend themselves to different ways of modelling how the response variable depends on covariates. We consider models with the following structures:

$$\begin{aligned} \text {Structure 1:} \quad&{\mathbf {Q}}^\top {\mathbf {y}}_i \sim \text {H}(\kappa ,\beta ,{\varvec{\Gamma }}({\mathbf {x}}_i)); \end{aligned}$$
(3)
$$\begin{aligned} \text {Structure 2:} \quad&{\mathbf {Q}}^\top {\mathbf {y}}_i \sim \text {H}({\varvec{\mu }}({\mathbf {x}}_i),{\varvec{\gamma }}({\mathbf {x}}_i)); \end{aligned}$$
(4)

where \(\text {H}(\cdot )\) is one of \(\text {Kent}(\cdot )\) or \(\text {ESAG}(\cdot )\) and \({\mathbf {Q}}\) is an orthogonal matrix, discussed later in Sect. 3 and the “Appendix”, which is needed so that inference does not depend in undesirable ways on the particular, and possibly arbitrary, coordinate system in which the \({{\mathbf {y}}_i}\) are defined. A primary difference between (3) and (4) is that in Structure 1 we allow the mean direction and orientation of the dispersion to depend on the covariate vector, but the magnitude of dispersion and anisotropy is fixed. For Structure 2, all of these depend on the covariate. We specify in Sect. 3 particular functional forms for the \({\varvec{\Gamma }}(\cdot )\), \({\varvec{\mu }}(\cdot )\) and \({\varvec{\gamma }}(\cdot )\), but for now note that we will consider four models that result from the different combinations of these two structures and two error distributions. We will call these Kent1, ESAG1, Kent2, and ESAG2, where, for example, ESAG1 means using \(\text {H}(\cdot ) = \text {ESAG}(\cdot )\) as the error distribution and Structure 1 to model dependence on covariates. This modelling approach, in which we assume that the parameters of the error distribution depend on the covariates in particular functional ways, closely parallels generalised linear modelling, although rather than having a single linear predictor, here we have several.

Before giving more details about the parametrisations and models, we briefly discuss some earlier papers on spherical regression. Rivest (1989) considered the case with covariates themselves on the sphere, \({\mathbf {x}}_i \in {\mathbb {S}}^2\), and a Fisher error distribution with the mean direction modelled as \({\tilde{{\varvec{\mu }}}}({\mathbf {x}}_i) = {\mathbf {R}} {\mathbf {x}}_i\), where \({\mathbf {R}}\in \text {SO}(3)\) is a rotation. Rosenthal et al. (2014) replaced the rotation with the “projective linear transformation” (PLT), \({\tilde{{\varvec{\mu }}}}({\mathbf {x}}_i) = {\mathbf {A}} {\mathbf {x}}_i /\Vert {\mathbf {A}} {\mathbf {x}}_i \Vert \), with \({\mathbf {A}} \in \text {SL}(3)\) where \( \text {SL}(3) = \{ \mathbf{A} \in {\mathbb {R}}^{3\times 3}: \det (\mathbf{A}) = 1 \}\) is the special linear group. This is a generalisation of Rivest’s model since \(\text {SL}(3)\) contains \(\text {SO}(3)\). We consider the PLT later, using it to benchmark performance of the new models we introduce.

Besides regression models on the unit sphere, \({\mathbb {S}}^2\), there are several models for regression on the unit circle, \({\mathbb {S}}^1\). Presnell et al. (1998) consider regression on \({\mathbb {S}}^1\) for a general covariate \({\mathbf {x}}_i\), assuming IAG errors. We mention this model in particular because it is a close analogue on \({\mathbb {S}}^1\) of our ESAG2 model on \({\mathbb {S}}^2\) in the isotropic case (which corresponds to \({\varvec{\gamma }}= 0\)), as discussed later. Related work includes the \({\mathbb {S}}^1\) regression model of Fisher and Lee (1992), but this is less relevant to the present paper because it does not generalise conveniently to \({\mathbb {S}}^2\) or higher-dimensional spheres; see Mardia and Jupp (2000) for a discussion of this and of the wider context of regression on \({\mathbb {S}}^1\). We also mention a regression model for data on the simplex introduced by Scealy and Welsh (2011). Their approach is to use a “square-root transformation” to map the data from the simplex to the positive orthant of the sphere, and then to develop regression models for the transformed data using the Kent distribution. On the sphere, as opposed to the simplex, however, we believe it is especially important to allow what Scealy and Welsh (2011) refer to as \(\mathbf{K}^*\) to depend on regression variables, something that they do not consider because they focus on transformed compositional data; see the discussion in the concluding section of their paper.

The main goals of this paper are: to explore and compare the modelling Structures 1 and 2; to investigate in the regression context the advantages and disadvantages of the Kent and ESAG distributions as error distributions; and to develop hypothesis tests for the significance of particular covariates, and of anisotropy.

In the following section, we introduce the Kent and ESAG distributions in each of the two parametrisations, then in Sect. 3 we develop the two modelling structures and hypothesis testing procedures. In Sect. 4, we introduce some novel residuals for model fitting diagnostics; then in Sect. 5, we implement the models and methods on various examples involving both synthetic and real data. Code for fitting the models in this paper is available on the second author’s web page.

2 Elliptically symmetric distributions on \({\mathbb {S}}^2\)

Here, we give details of the \(({\varvec{\mu }}, {\varvec{\gamma }})\) and \((\kappa , \beta , {\varvec{\Gamma }})\) parametrisations of the Kent and ESAG distributions.

2.1 Kent distribution

 Kent (1982) introduced this distribution using a \(\left( \kappa , \beta , {\pmb \Gamma } \right) \) parametrisation, in terms of which the density is

$$\begin{aligned} f_\text {Kent}({\varvec{y}} \vert \kappa ,\beta , {\varvec{\Gamma }})&= C(\kappa , \beta )^{-1} \nonumber \\&\quad \times \exp \left( \kappa {\varvec{y}}^\top {{\tilde{{\varvec{\mu }}}}} + \beta \left( \left( {\varvec{y}}^\top {\pmb \xi _1} \right) ^2 - \left( {\varvec{y}}^\top {\pmb \xi }_2 \right) ^2 \right) \right) , \end{aligned}$$
(5)

where \(C(\kappa , \beta )\) is the normalising constant.

Lemma 1

The Kent density in a \(({\varvec{\mu }},{\varvec{\gamma }})\) parametrisation is

$$\begin{aligned} f_{\text {Kent}}\left( \mathbf{y} \vert {{\varvec{\mu }}}, {{\varvec{\gamma }}} \right)&= C(\kappa ,\beta )^{-1}\nonumber \\&\quad \times \exp \Big ( {{\varvec{\mu }}}^\top \mathbf{y} + \mathbf{y}^\top \Big ( \gamma _1 \Big ( \tilde{{\pmb \xi }}_1 \tilde{{\pmb \xi }}_1^\top {-} \tilde{{\pmb \xi }}_2 \tilde{{\pmb \xi }}_2 ^\top \Big ) \nonumber \\&\quad + \gamma _2 \Big ( \tilde{{\pmb \xi }}_1 \tilde{{\pmb \xi }}_2^\top {+} \tilde{{\pmb \xi }}_2 \tilde{{\pmb \xi }}_1^\top \Big ) \Big ) \mathbf{y} \Big ), \end{aligned}$$
(6)

where \(\kappa = \Vert {\varvec{\mu }}\Vert \), \(\beta = \sqrt{ \gamma _1^2 + \gamma _2^2 }\) and \( ({\tilde{{\varvec{\xi }}}}_1 \,\, {\tilde{{\varvec{\xi }}}}_2) = ( {{\varvec{\xi }}}_1 \,\, {{\varvec{\xi }}}_2 ) {\mathbf {R}}(\psi )^\top \), with \({\mathbf {R}}(\psi )\) defined as in (14), and where \(\psi \in (0, \pi ]\) is the solution of \(\gamma _1 = \beta \cos 2 \psi \) and \(\gamma _2 = \beta \sin 2 \psi \).

The proof of Lemma 1 is in the “Appendix”.

2.2 Elliptically symmetric angular Gaussian (ESAG) distribution

The general angular Gaussian distribution is the marginal distribution of the directional component of the multivariate normal distribution; that is, if \({\mathbf {z}} \sim N({\varvec{\mu }}, {\mathbf {V}})\) in \({\mathbb {R}}^p\), for a general mean \({\varvec{\mu }}\in {\mathbb {R}}^p\) and covariance matrix \(\mathbf{V}\), then \({\mathbf {z}}/\Vert {\mathbf {z}}\Vert \) has a general angular Gaussian distribution. The elliptically symmetric angular Gaussian (ESAG) distribution, developed in Paine et al. (2017), is a subfamily of the general angular Gaussian distribution. It is defined by the two conditions

$$\begin{aligned} \mathbf{V}{\pmb \mu }={\pmb \mu }, \quad {\text {det}}(\mathbf{V})=1, \end{aligned}$$
(7)

and on \({\mathbb {S}}^2\) has density

$$\begin{aligned} f_\text {ESAG}(\mathbf{y}| {\varvec{\mu }}, {\mathbf {V}})&= \frac{1}{2 \pi (\mathbf{y}^\top \mathbf{V}^{-1}\mathbf{y})^{3/2}} \nonumber \\&\quad \times \exp \left[ \frac{1}{2}\left\{ \frac{\left( \mathbf{y}^\top {\pmb \mu }\right) ^2}{\mathbf{y}^\top \mathbf{V}^{-1}\mathbf{y}} -{\pmb \mu }^\top {\pmb \mu } \right\} \right] \nonumber \\&\quad \times {M}\left\{ \frac{\mathbf{y}^\top {\pmb \mu }}{\left( \mathbf{y}^\top \mathbf{V}^{-1}\mathbf{y}\right) ^{1/2}} \right\} , \end{aligned}$$
(8)

where \(M(\cdot )\) is defined as in (2). Distribution (8) has 5 free parameters, which can be seen by first fixing the 3 free parameters of \({\varvec{\mu }}= (\mu _1, \mu _2, \mu _3)^\top \) then observing that conditions (7) leave 2 degrees of freedom in \({\mathbf {V}}\). Let \(\rho _1, \rho _2, \rho _3\) be the eigenvalues of \({\mathbf {V}}\), with corresponding orthonormal eigenvectors \({\pmb \xi }_1, {\pmb \xi }_2,{\pmb \xi }_3\), respectively. Then, by the spectral decomposition theorem,

$$\begin{aligned} {\mathbf {V}}^{-1}&= \rho _1^{-1} {\varvec{\xi }}_1 {\varvec{\xi }}_1^\top + \rho _2^{-1} {\varvec{\xi }}_2 {\varvec{\xi }}_2^\top + \rho _3^{-1} {\varvec{\xi }}_3 {\varvec{\xi }}_3^\top \nonumber \\&= \rho _1^{-1} {\varvec{\xi }}_1 {\varvec{\xi }}_1^\top + \rho _1 {\varvec{\xi }}_2 {\varvec{\xi }}_2^\top + {\tilde{{\varvec{\mu }}}} {\tilde{{\varvec{\mu }}}}^\top , \end{aligned}$$
(9)

where \({\tilde{{\varvec{\mu }}}} = {\varvec{\mu }}/ \Vert {\varvec{\mu }}\Vert \). The final term in (9) is a consequence of the constraint \({\mathbf {V}} {\varvec{\mu }}= {\varvec{\mu }}\), and \(\rho _2^{-1} = \rho _1\) then follows from \(\text {det}({\mathbf {V}})=1\). Once \({\varvec{\mu }}\) is fixed, then in \({\mathbf {V}}^{-1}\) there is one degree of freedom from \(\rho _1\), and one degree of freedom from fixing the orientation of \({\varvec{\xi }}_1\) and \({\varvec{\xi }}_2\).

Lemma 2

The ESAG density in a \((\kappa ,\beta ,{\varvec{\Gamma }})\) parametrisation, where \({\varvec{\Gamma }} = ({\tilde{{\varvec{\mu }}}} \,\, {\varvec{\xi }}_1 \,\, {\varvec{\xi }}_2 )\), is given by (8) with \({\mathbf {V}} = {\mathbf {V}}(\beta , {\varvec{\xi }}_1, {\varvec{\xi }}_2)\) defined by

$$\begin{aligned} {\mathbf {V}}^{-1}&= {\mathbf {I}} + \beta \left( {\varvec{\xi }}_1 {\varvec{\xi }}_1^\top - {\varvec{\xi }}_2 {\varvec{\xi }}_2^\top \right) \nonumber \\&\quad + \left( \sqrt{\beta ^2 + 1} - 1 \right) \left( {\varvec{\xi }}_1 {\varvec{\xi }}_1^\top + {\varvec{\xi }}_2 {\varvec{\xi }}_2^\top \right) , \end{aligned}$$
(10)

with \(\beta = 2^{-1} \left( \rho _1^{-1} - \rho _1 \right) \).

Lemma 2 follows directly from substituting (10) and \({\varvec{\mu }}= \kappa {\tilde{{\varvec{\mu }}}}\), with \(\kappa \ge 0\) and \({\tilde{{\varvec{\mu }}}} \in {\mathbb {S}}^2\), into (8). Note that \(\beta =0\) in (10) implies isotropy.

Paine et al. (2017) chose to parametrise \({\mathbf {V}}^{-1}\) via

$$\begin{aligned} \tilde{\pmb \xi }_1 \equiv \tilde{\pmb \xi }_1({\pmb \mu }) =\left( -\mu _0^2, \mu _1 \mu _2, \mu _1 \mu _3\right) ^\top \!\! \big /(\mu _0\Vert {\pmb \mu }\Vert ) \end{aligned}$$
(11)

and

$$\begin{aligned} \tilde{\pmb \xi }_2 \equiv \tilde{\pmb \xi }_2({\pmb \mu }) = \left( 0, -\mu _3, \mu _2\right) ^\top \!\! \big /\mu _0, \end{aligned}$$
(12)

where \(\mu _0=(\mu _2^2+\mu _3^2)^{1/2}>0\). Hence, \({\tilde{{\varvec{\xi }}}}_1\) and \({\tilde{{\varvec{\xi }}}}_2\) are unit vectors which are orthogonal to each other and to the mean direction \({\tilde{{\varvec{\mu }}}} = {\varvec{\mu }}/ \Vert {\varvec{\mu }}\Vert \). Each is a function of \({\varvec{\mu }}\) and related to \({\varvec{\xi }}_1\) and \({\varvec{\xi }}_2\) via a rotation

$$\begin{aligned} ( {\varvec{\xi }}_1 \, \, {\varvec{\xi }}_2 ) = ( {\tilde{{\varvec{\xi }}}}_1 \,\, {\tilde{{\varvec{\xi }}}}_2 ) \, {\mathbf {R}}(\psi ), \end{aligned}$$
(13)

where

$$\begin{aligned} {\mathbf {R}}(\psi ) = \begin{pmatrix} \cos \psi &{} -\sin \psi \\ \sin \psi &{} \cos \psi \end{pmatrix}. \end{aligned}$$
(14)

Substituting (13) into (9), and using the fact that \({\varvec{\xi }}_1 {\varvec{\xi }}_1^\top + {\varvec{\xi }}_2 {\varvec{\xi }}_2^\top + {\tilde{{\varvec{\mu }}}} {\tilde{{\varvec{\mu }}}}^\top = {\mathbf {I}}\), where \(\mathbf{I}\) is the identity matrix, leads to

$$\begin{aligned} \mathbf{V}^{-1}&= \mathbf{I}+\gamma _1 \left( \tilde{\pmb \xi }_1 \tilde{\pmb \xi }_1^\top -\tilde{\pmb \xi }_2 \tilde{\pmb \xi }_2^\top \right) +\gamma _2 \left( \tilde{\pmb \xi }_1 \tilde{\pmb \xi }_2^\top +\tilde{\pmb \xi }_2 \tilde{\pmb \xi }_1^\top \right) \nonumber \\&\quad + \left\{ (\gamma _1^2+\gamma _2^2+1)^{1/2}-1 \right\} \left( \tilde{\pmb \xi }_1 \tilde{\pmb \xi }_1^\top +\tilde{\pmb \xi }_2 \tilde{\pmb \xi }_2^\top \right) , \end{aligned}$$
(15)

where

$$\begin{aligned} \begin{pmatrix}\gamma _1 \\ \gamma _2 \end{pmatrix}= 2^{-1} \left( \rho _1^{-1} - \rho _1 \right) \begin{pmatrix} \cos 2 \psi \\ \sin 2\psi \end{pmatrix}; \end{aligned}$$

see Lemma 1 in Paine et al. (2017). The (\({\varvec{\mu }}\), \({\varvec{\gamma }}\)) parametrisation of the density, \(f_\text {ESAG}({\mathbf {y}} | {\varvec{\mu }}, {\varvec{\gamma }})\), is hence given by (8), with \({\mathbf {V}} = {\mathbf {V}}({\varvec{\mu }}, {\varvec{\gamma }})\) defined by (15). An advantage of this parametrisation is that \(\gamma _1\) and \(\gamma _2\) are unconstrained, which is helpful for regression modelling. The isotropic subfamily, IAG, is the special case with \({\pmb \gamma } = (0, 0)^\top \).
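Since ESAG is a projected normal distribution, simulation requires only assembling \({\mathbf {V}}\) and normalising a multivariate normal draw. A minimal sketch, building \({\mathbf {V}}^{-1}\) from \(({\varvec{\mu }}, {\varvec{\gamma }})\) via (11), (12) and (15) (function names are ours; \(\mu _0 > 0\) is assumed, as in the text):

```python
import numpy as np

def esag_vinv(mu, gamma):
    """V^{-1} in the (mu, gamma) parametrisation, assembled via (11), (12), (15)."""
    mu = np.asarray(mu, dtype=float)
    g1, g2 = gamma
    mu0 = np.hypot(mu[1], mu[2])        # (mu_2^2 + mu_3^2)^{1/2}, assumed > 0
    xt1 = np.array([-mu0**2, mu[0] * mu[1], mu[0] * mu[2]]) / (mu0 * np.linalg.norm(mu))
    xt2 = np.array([0.0, -mu[2], mu[1]]) / mu0
    return (np.eye(3)
            + g1 * (np.outer(xt1, xt1) - np.outer(xt2, xt2))
            + g2 * (np.outer(xt1, xt2) + np.outer(xt2, xt1))
            + (np.sqrt(g1**2 + g2**2 + 1) - 1)
              * (np.outer(xt1, xt1) + np.outer(xt2, xt2)))

def esag_sample(mu, gamma, n, rng=None):
    """Simulate ESAG(mu, gamma) by projecting z ~ N_3(mu, V) onto the sphere."""
    rng = np.random.default_rng() if rng is None else rng
    V = np.linalg.inv(esag_vinv(mu, gamma))
    z = rng.multivariate_normal(np.asarray(mu, dtype=float), V, size=n)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# e.g. Y = esag_sample(mu=[5., 10., 2.], gamma=(2., 3.), n=100)
```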

2.3 Practical differences between Kent and ESAG distributions

Both the Kent and ESAG distributions have similar characteristics from a modelling perspective: each typically has ellipse-like contours of constant probability density centred on the mean direction in the unimodal case, and for different parameter values each has unimodal and bimodal cases. On practical grounds, the two distributions have different advantages and disadvantages. The Kent distribution belongs to the exponential family, and hence its density, (5), has a simple mathematical form. In comparison, the ESAG density, (8), is rather cumbersome. On the other hand, the ESAG density and likelihood can be computed exactly, whereas the Kent density and likelihood involve a normalising constant, \(C(\kappa , \beta )\) in (5), which is not known in closed form and hence needs to be approximated, by truncating an infinite series (Kent 1982), or else by saddlepoint or holonomic gradient methods (Kume and Sei 2017; Kume et al. 2013). In the present context, we maximise the likelihood for the regression models numerically, so the cumbersome form of the ESAG likelihood is no drawback, and the fact that it can be computed exactly is an advantage. For simulation, the Kent distribution requires a rejection algorithm (Kent et al. 2018), whereas ESAG can be simulated quickly and easily. Fast simulation is especially helpful in simulation-heavy inference procedures, e.g. the parametric bootstrap.
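To illustrate the series approximation of \(C(\kappa , \beta )\), the sketch below assumes the expansion given by Kent (1982), \(C(\kappa ,\beta )=2\pi \sum _{j \ge 0} \frac{\Gamma (j+1/2)}{\Gamma (j+1)}\, \beta ^{2j} (2/\kappa )^{2j+1/2} I_{2j+1/2}(\kappa )\), where \(I_\nu \) is a modified Bessel function of the first kind; at \(\beta = 0\) the \(j=0\) term recovers the Fisher constant \(4\pi \sinh (\kappa )/\kappa \) implicit in (1):

```python
import numpy as np
from scipy.special import gamma as Gamma, iv

def kent_norm_const(kappa, beta, n_terms=50):
    """Truncated series for C(kappa, beta) in (5).

    For large kappa, iv overflows; scipy.special.ive (exponentially scaled)
    can be substituted, multiplying the result by exp(kappa)."""
    j = np.arange(n_terms)
    terms = (2 * np.pi * Gamma(j + 0.5) / Gamma(j + 1.0)
             * beta**(2 * j) * (2.0 / kappa)**(2 * j + 0.5)
             * iv(2 * j + 0.5, kappa))
    return terms.sum()

# Sanity check: kent_norm_const(kappa, 0.0) equals the Fisher normalising
# constant 4 * pi * sinh(kappa) / kappa implicit in (1).
```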

3 Regression model structures

In this section, we specify the two model structures in (3) and (4) and then discuss the advantages and disadvantages of each. It is assumed throughout the paper that the first element of \({\mathbf {x}}_i\) is 1, which is analogous to the inclusion in linear modelling of an “intercept term”. For Structure 2 models, see (4), this means that the simpler model of \(\{ {\mathbf {y}}_i \}\) being IID, i.e. not depending on the covariates, is nested in the general regression model and this is helpful for testing the significance of regression. The motivation for including the intercept term is less clear-cut a priori for Structure 1 models, see (3), though empirical results, for example in Table 2 later, suggest there is sometimes a benefit from doing so.

Each model structure is defined in terms of a preliminary orthogonal transformation, \({\mathbf {Q}}\). For Structure 1 models, \({\mathbf {Q}}\) is assumed to be a population quantity, defined explicitly in the “Appendix”, and estimated by a sample version \(\hat{{\mathbf {Q}}}\). For Structure 2 models, \({\mathbf {Q}}\) is treated as a tuning parameter and optimised with respect to. These preliminary transformations are needed so that desirable invariance and equivariance properties, discussed in the “Appendix”, hold when an arbitrary orthogonal transformation is applied to the \({\mathbf {y}}_i\).

3.1 Structure 1: \({\mathbf {Q}}^\top {\mathbf {y}}_i \sim H(\kappa , \beta , {\varvec{\Gamma }}({\mathbf {x}}_i))\)

In this structure, \({\mathbf {Q}}\) is a population quantity, as defined in the “Appendix”, and in all calculations involving data it is replaced by a sample version \(\hat{{\mathbf {Q}}}=[\hat{\pmb \xi } \hat{\pmb \xi }_1 \hat{\pmb \xi }_2]\) with \(\hat{\pmb \xi }=\sum _{i=1}^n {\mathbf {y}}_i /\vert \vert \sum _{i=1}^n {\mathbf {y}}_i \vert \vert \) and \(\hat{\pmb \xi }_1\) and \(\hat{\pmb \xi }_2\) unit eigenvectors corresponding to the larger and smaller positive eigenvalues of

$$\begin{aligned} \left( \mathbf{I }_3 - \hat{\pmb \xi }\hat{\pmb \xi }^\top \right) \sum _{i=1}^n {\mathbf {y}}_i {\mathbf {y}}_i^\top \left( \mathbf{I }_3 - \hat{\pmb \xi } \hat{\pmb \xi }^\top \right) . \end{aligned}$$

Here, \(\hat{{\mathbf {Q}}}\) is the moment estimator, defined in Kent (1982, p. 74), of \({\varvec{\Gamma }}\) in (5) under the assumption of IID \({\mathbf {y}}_i\).

We consider the following form for \({\varvec{\Gamma }}({\mathbf {x}}_i)\), viewed as a function of \({\mathbf {x}}_i\):

$$\begin{aligned} {\varvec{\Gamma }}({\mathbf {x}}_i) = {\mathbf {R}}({\mathbf {x}}_i) \, \text {diag}[1, {\mathbf {S}}({\mathbf {x}}_i) ], \quad i=1, \ldots , n, \end{aligned}$$
(16)

where the \({\mathbf {R}}({\mathbf {x}}_i)\) are orthogonal 3-by-3 matrices, the \({\mathbf {S}}({\mathbf {x}}_i)\) are orthogonal 2-by-2 matrices, \(\text {diag}[.,.]\) is a 3-by-3 block diagonal matrix with \(1 \times 1\) and \(2 \times 2\) blocks, and \({\mathbf {x}}_i\) is \(q \times 1\).

The dependence of \({\mathbf {R}}({\mathbf {x}}_i)\) and \({\mathbf {S}}({\mathbf {x}}_i)\) on the covariate vector \({\mathbf {x}}_i\) needs to be prescribed. We choose to do so using the Cayley transform: for any skew-symmetric matrix \({\mathbf {A}}\), i.e. \({\mathbf {A}} = - {\mathbf {A}}^\top \), the matrix \(({\mathbf {I}} - {\mathbf {A}})({\mathbf {I}} + {\mathbf {A}})^{-1}\) is a rotation matrix of the same dimension (i.e. an orthogonal matrix with determinant \(+1\)). The Cayley transform maps the skew-symmetric matrices onto the set of rotations minus a set of lower dimension (see the “Appendix”). This is an injective mapping, which is the reason we favour it over, e.g., the exponential of \({\mathbf {A}}\). Define

$$\begin{aligned} {\mathbf {R}}({\mathbf {x}}_i)&= ({\mathbf {I}} - {\mathbf {A}}_{\text {R},i})({\mathbf {I}} + {\mathbf {A}}_{\text {R},i})^{-1}, \quad \text {and} \nonumber \\ {\mathbf {S}}({\mathbf {x}}_i)&= ({\mathbf {I}} - {\mathbf {A}}_{\text {S},i})({\mathbf {I}} + {\mathbf {A}}_{\text {S},i})^{-1} \end{aligned}$$
(17)

where

$$\begin{aligned} {\mathbf {A}}_{\text {R},i}&= \begin{pmatrix} 0 &{} {\varvec{\beta }}_1^\top {\mathbf {x}}_i &{} {\varvec{\beta }}_2^\top {\mathbf {x}}_i \\ -{\varvec{\beta }}_1^\top {\mathbf {x}}_i &{} 0 &{} {\varvec{\beta }}_3^\top {\mathbf {x}}_i \\ -{\varvec{\beta }}_2^\top {\mathbf {x}}_i &{} -{\varvec{\beta }}_3^\top {\mathbf {x}}_i&{} 0 \end{pmatrix}, \quad \text {and} \nonumber \\ {\mathbf {A}}_{\text {S},i}&= \begin{pmatrix} 0 &{} {\varvec{\beta }}_4^\top {\mathbf {x}}_i \\ -{\varvec{\beta }}_4^\top {\mathbf {x}}_i &{} 0 \end{pmatrix}, \end{aligned}$$
(18)

are skew-symmetric. Here, \({\mathbf {R}}(\cdot )\) and \({\mathbf {S}}(\cdot )\), and hence \({\varvec{\Gamma }}(\cdot )\), are playing a role analogous to link functions in generalised linear models, linking linear predictors to the parameters of the distribution of the response variable. The nature of the link functions means that interpreting the influence of individual \(\beta _j\)s is somewhat harder for this model than for Structure 2 models described below. This model is fitted by maximising the likelihood function of observed data with respect to the 4-by-q parameter matrix \({\mathbf {B}} = \left( {\varvec{\beta }}_1, {\varvec{\beta }}_2, {\varvec{\beta }}_3, {\varvec{\beta }}_4 \right) ^\top \).
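For concreteness, the following sketch assembles \({\varvec{\Gamma }}({\mathbf {x}}_i)\) from \({\mathbf {B}}\) and \({\mathbf {x}}_i\) via (16)–(18); the helper names are ours:

```python
import numpy as np

def cayley(A):
    """Cayley transform of skew-symmetric A: (I - A)(I + A)^{-1}, a rotation.

    (I - A) and (I + A)^{-1} commute, so solve(I + A, I - A) suffices."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + A, I - A)

def Gamma_of_x(B, x):
    """Gamma(x) = R(x) diag[1, S(x)] as in (16)-(18); B is 4-by-q with rows
    beta_1^T, ..., beta_4^T, and x is the q-vector of covariates."""
    b = B @ x                            # the four linear predictors beta_j^T x
    A_R = np.array([[0.0,   b[0],  b[1]],
                    [-b[0], 0.0,   b[2]],
                    [-b[1], -b[2], 0.0]])
    A_S = np.array([[0.0,   b[3]],
                    [-b[3], 0.0]])
    D = np.zeros((3, 3))
    D[0, 0] = 1.0
    D[1:, 1:] = cayley(A_S)              # diag[1, S(x)]
    return cayley(A_R) @ D
```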

3.2 Structure 2: \({\mathbf {Q}}^\top {\mathbf {y}}_i \sim H({\varvec{\mu }}({\mathbf {x}}_i),{\varvec{\gamma }}({\mathbf {x}}_i))\)

In this parametrisation, \({\varvec{\mu }}\in {\mathbb {R}}^3\) and \({\varvec{\gamma }}\in {\mathbb {R}}^2\) are unrestricted, and \({\varvec{\mu }}({\mathbf {x}}_i)\) and \({\varvec{\gamma }}({\mathbf {x}}_i)\) are easy to specify as functions mapping from the q-dimensional domain of the \(\{{\mathbf {x}}_i\}\) to \({\mathbb {R}}^3\) and \({\mathbb {R}}^2\), respectively. Here, we limit attention to linear functions,

$$\begin{aligned} {\varvec{\mu }}({\mathbf {x}}_i)&= \begin{pmatrix} {\varvec{\beta }}_1^\top {\mathbf {x}}_i \\ {\varvec{\beta }}_2^\top {\mathbf {x}}_i \\ {\varvec{\beta }}_3^\top {\mathbf {x}}_i \end{pmatrix} = {\mathbf {B}}_1 {\mathbf {x}}_i, \quad \text {and} \nonumber \\ {\varvec{\gamma }}({\mathbf {x}}_i)&= \begin{pmatrix} {\varvec{\beta }}_4^\top {\mathbf {x}}_i \\ {\varvec{\beta }}_5^\top {\mathbf {x}}_i \end{pmatrix} = {\mathbf {B}}_2 {\mathbf {x}}_i, \end{aligned}$$
(19)

where \({\mathbf {B}}_1 = ({\varvec{\beta }}_1, {\varvec{\beta }}_2, {\varvec{\beta }}_3)^\top \) and \({\mathbf {B}}_2 = ({\varvec{\beta }}_4, {\varvec{\beta }}_5)^\top \). In keeping with the notation of the preceding model, we collect these parameters together into a 5-by-q matrix \(\mathbf{B} = \left( \mathbf{B}_1^\top , \mathbf{B}^\top _2 \right) ^\top \), where the influence of the subsets of parameters can be clearly distinguished: \({\mathbf {B}}_1\) controls the influence of the covariates, via \({\varvec{\mu }}\), on the concentration and mean direction; and \({\mathbf {B}}_2\) controls influence, via \({\varvec{\gamma }}\), on the degree and orientation of anisotropy. This leads to natural tests, e.g. for anisotropy, discussed below.

Unlike in Structure 1, in which model (16) is naturally tied to the particularly defined \({\mathbf {Q}}\), for Structure 2 and model (19) there is no a priori reason to select a particular \({\mathbf {Q}} \in O(3)\); hence, we treat \({\mathbf {Q}}\) as a tuning parameter, seeking to maximise the likelihood of the data \(\left\{ {\mathbf {Q}}^\top {\mathbf {y}}_i \right\} \) over \(\left\{ {\mathbf {Q}}, {\mathbf {B}} \right\} \). A practical way to do so at least approximately is via a brute-force search for \({\mathbf {Q}}\) over O(3), for each value of \({\mathbf {Q}}\) on a grid over O(3) computing the maximum likelihood estimator \(\hat{{\mathbf {B}}}\) of \({\mathbf {B}}\), then selecting the pair \(\left\{ {\mathbf {Q}}, \hat{{\mathbf {B}}} \right\} \) corresponding to the largest maximised likelihood. In this paper, when comparing models for a particular data set, we compute \({\mathbf {Q}}\) for the most general ESAG2 model and keep this \({\mathbf {Q}}\) fixed for submodels and Kent2 models.

Model (19) with ESAG errors,

$$\begin{aligned} {\mathbf {y}}_i \sim \text {ESAG}({\mathbf {B}}_1 {\mathbf {x}}_i, {\mathbf {B}}_2 {\mathbf {x}}_i), \end{aligned}$$
(20)

is close in spirit to the circular \(p=2\) regression models of Presnell et al. (1998) and Wang and Gelfand (2013), particularly in the isotropic special case, with \({\mathbf {B}}_2={\mathbf {0}}\), in which this model is a direct analogue for \(p=3\) of Presnell et al.’s regression model on the circle.

A helpful property proved by Presnell et al. in the circular case is that the log-likelihood function is a concave function of the regression parameters—in our notation \({\mathbf {B}}_1\)—that determine \({\varvec{\mu }}\); this guarantees that the MLE of \({\mathbf {B}}_1\) is unique and easily determined by numerical optimisation. The corresponding result holds for ESAG2 (20) in the \(p=3\) case with isotropic errors, i.e. \({\mathbf {B}}_2 = {\mathbf {0}}\), as follows [in which \({\text {vec}}\) is the standard vectorisation operator; see e.g. Mardia et al. (1979)].

Proposition 1

Consider model (20), let \({\mathbf {B}}_2 = {\mathbf {0}}\), and let \(l({\mathbf {B}}_1)\) denote the log of the likelihood function for parameter \({\mathbf {B}}_1\) given observations \(({\mathbf {x}}_1,{\mathbf {y}}_1), \ldots , ({\mathbf {x}}_n,{\mathbf {y}}_n)\). Provided \(\left( {\mathbf {x}}_1, \dots , {\mathbf {x}}_n \right) ^\top \) has full rank, the negative Hessian

$$\begin{aligned} -\frac{\partial ^2 l({\mathbf {B}}_1)}{\partial {\text {vec}} {\mathbf {B}}_1 \partial {\text {vec}} {\mathbf {B}}_1^\top } \end{aligned}$$

is positive definite, hence \(l({\mathbf {B}}_1)\) is a concave function of \({\mathbf {B}}_1\), and the MLE of \({\mathbf {B}}_1\) is unique.

The proof of this Proposition is given in the “Appendix”.
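In practice, we fit (20) by direct numerical maximisation of the log-likelihood. The following minimal sketch (ours, not the released implementation) illustrates the approach, assuming the responses have already been transformed by \({\mathbf {Q}}^\top \) and that \({\varvec{\mu }}({\mathbf {x}}_i) \ne {\mathbf {0}}\) along the optimisation path:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def esag_logpdf(y, mu, gamma):
    """Log of the ESAG density (8), with V^{-1} built from (11), (12), (15)."""
    g1, g2 = gamma
    mu0 = np.hypot(mu[1], mu[2])                      # assumed > 0
    xt1 = np.array([-mu0**2, mu[0] * mu[1], mu[0] * mu[2]]) / (mu0 * np.linalg.norm(mu))
    xt2 = np.array([0.0, -mu[2], mu[1]]) / mu0
    Vinv = (np.eye(3)
            + g1 * (np.outer(xt1, xt1) - np.outer(xt2, xt2))
            + g2 * (np.outer(xt1, xt2) + np.outer(xt2, xt1))
            + (np.sqrt(g1**2 + g2**2 + 1) - 1)
              * (np.outer(xt1, xt1) + np.outer(xt2, xt2)))
    q = y @ Vinv @ y
    a = (y @ mu) / np.sqrt(q)
    M = a * norm.pdf(a) + (1 + a**2) * norm.cdf(a)
    return (-np.log(2 * np.pi) - 1.5 * np.log(q)
            + 0.5 * (a**2 - mu @ mu) + np.log(M))

def esag2_negloglik(theta, X, Y):
    """Negative log-likelihood of (20); theta stacks vec(B1) and vec(B2)."""
    q = X.shape[1]
    B1 = theta[:3 * q].reshape(3, q)
    B2 = theta[3 * q:].reshape(2, q)
    return -sum(esag_logpdf(y, B1 @ x, B2 @ x) for x, y in zip(X, Y))

# fit = minimize(esag2_negloglik, theta0, args=(X, Y), method="BFGS"),
# with theta0 chosen so that B1 @ x_i is nonzero for all i.
```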

3.3 Tests for the significance of anisotropy and regression

In this section, we discuss procedures for performing hypothesis tests required for model selection and inference. To do so, we introduce the notation for the parameters \({\mathbf {B}} = \left( {\varvec{\beta }}^{(1)}, {\varvec{\beta }}^{(2)}, \ldots ,{\varvec{\beta }}^{(q)}\right) \), i.e. such that \({\varvec{\beta }}^{(j)}\) is the jth column of \({\mathbf {B}}\) and corresponds to the covariate appearing as the jth element of \({\mathbf {x}}_i\). A test of the significance of this particular covariate corresponds to a test with null and alternative hypotheses

$$\begin{aligned} \text {H}_0{:} {\varvec{\beta }}^{(j)} = {\mathbf {0}} \quad \text {versus} \quad \text {H}_1{:} {\varvec{\beta }}^{(j)} \text { free}. \end{aligned}$$

Since the null hypothesis is nested in the alternative, by Wilks’ theorem, subject to the usual regularity conditions, under \(\text {H}_0\),

$$\begin{aligned} T = -2 \log \left( L_0/L_1 \right) \sim \chi ^2_\nu , \end{aligned}$$
(21)

asymptotically as \(n \rightarrow \infty \), where \(L_0\) and \(L_1\) are the maximised likelihood functions under \(\text {H}_0\) and \(\text {H}_1\), respectively; and \(\nu \) is the difference in the number of free parameters between \(\text {H}_0\) and \(\text {H}_1\), here equal to 4 or 5 for the Structure 1 and 2 models, respectively. The significance of the parameter can be assessed by referring the observed test statistic, T, to the \(\chi ^2_\nu \) distribution. An alternative possibility, preferable when n is insufficiently large for the null asymptotic distribution (21) to be reasonable, is to approximate the null distribution using a bootstrap procedure.
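A sketch of both procedures follows; the routines fit0, fit1 and simulate0 are placeholders for user-supplied functions, not part of any existing package:

```python
import numpy as np
from scipy.stats import chi2

def lr_test(loglik0, loglik1, nu):
    """Wilks statistic (21) and its asymptotic chi^2_nu p value."""
    T = 2.0 * (loglik1 - loglik0)
    return T, chi2.sf(T, df=nu)

def bootstrap_pvalue(T_obs, fit0, fit1, simulate0, n_boot=999):
    """Parametric-bootstrap p value for T_obs.

    fit0 and fit1 return maximised log-likelihoods under H0 and H1 for a
    given data set; simulate0 draws a data set from the fitted null model."""
    T_boot = np.empty(n_boot)
    for b in range(n_boot):
        data_b = simulate0()
        T_boot[b] = 2.0 * (fit1(data_b) - fit0(data_b))
    return (1 + np.sum(T_boot >= T_obs)) / (n_boot + 1)
```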

Within Structure 2 models, it may be relevant to consider whether particular covariates are significant in \({\varvec{\mu }}\) or \({\varvec{\gamma }}\) distinctly. For example, for the covariate corresponding to the jth element of \({\mathbf {x}}_i\), a test that the covariate is significant in \({\varvec{\gamma }}\) corresponds to the hypotheses

$$\begin{aligned}&\text {H}_0{:} ({\varvec{\beta }}^{(j)})_4 = ({\varvec{\beta }}^{(j)})_5 = {0} \quad \text {versus} \nonumber \\&\text {H}_1{:} ({\varvec{\beta }}^{(j)})_4, \, ({\varvec{\beta }}^{(j)})_5 \text { free}, \end{aligned}$$
(22)

for which the degrees of freedom in (21) is \(\nu = 2\). Having isotropic errors corresponds to \({\varvec{\gamma }}= {\mathbf {0}}\), so for a test of the significance of anisotropy the hypotheses are \(\text {H}_0{:} {\mathbf {B}}_2 = {\mathbf {0}}\) versus \(\text {H}_1{:} {\mathbf {B}}_2\) free, where \({\mathbf {B}}_2\) is as defined in (19), and \(\nu = 2 q\).

4 Residuals for model diagnostics

For spherical regression models, there are many possible ways to define a residual. Here, we describe some general spherical residuals defined by Jupp (1988) before defining some particular model-based residuals for regression models with ESAG and Kent errors.

For observations \({{\mathbf {y}}}_1, \ldots , {{\mathbf {y}}}_n\) denote the fitted values by \(\hat{{\mathbf {y}}}_1, \ldots , \hat{{\mathbf {y}}}_n\). Jupp defined “crude residuals” as

$$\begin{aligned} {\mathbf {r}}_i = \left( {\mathbf {I}} - \hat{{\mathbf {y}}}_i \hat{{\mathbf {y}}}_i^\top \right) {{\mathbf {y}}}_i, \end{aligned}$$

i.e. as projections of each observation, \({\mathbf {y}}_i\), into the tangent plane at its fitted value, \(\hat{{\mathbf {y}}}_i\). Since the \({\mathbf {r}}_1, \ldots ,{\mathbf {r}}_n\) lie in different tangent planes, Jupp defined the “rotated residuals”

$$\begin{aligned} {\mathbf {s}}_i = {\mathbf {R}}(\hat{{\mathbf {y}}}_i,{\mathbf {y}}_0) {\mathbf {r}}_i, \end{aligned}$$
(23)

where \({\mathbf {y}}_0\) is an arbitrary point on the sphere, common to all i, and \({\mathbf {R}}(\hat{{\mathbf {y}}}_i,{\mathbf {y}}_0)\) is a rotation from \(\hat{{\mathbf {y}}}_i\) to \({\mathbf {y}}_0\), with the functional form of \({\mathbf {R}}(\cdot ,\cdot )\) the same for each i. Then, the \({\mathbf {s}}_1, \ldots , {\mathbf {s}}_n\) lie in the plane tangent to the sphere at \({\mathbf {y}}_0\). Let \({\varvec{\zeta }}_1, {\varvec{\zeta }}_2\) be an arbitrary pair of unit vectors orthogonal to each other and to \({\mathbf {y}}_0\); then a plot of the projected residuals

$$\begin{aligned} {\mathbf {t}}_i = \begin{pmatrix} {\varvec{\zeta }}_1^\top \\ {\varvec{\zeta }}_2^\top \end{pmatrix} {\mathbf {s}}_i, \end{aligned}$$
(24)

can be inspected to identify structure amongst residuals that could indicate a shortcoming of the model.
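For concreteness, a sketch computing the residuals (24), with the rotation \({\mathbf {R}}(\hat{{\mathbf {y}}}_i, {\mathbf {y}}_0)\) taken, for illustration, to be the rotation about the axis \(\hat{{\mathbf {y}}}_i \times {\mathbf {y}}_0\) given by Rodrigues' formula (one natural choice; the definition requires only that the same rule is used for each i):

```python
import numpy as np

def rotation_between(a, b):
    """Rotation taking unit vector a to unit vector b (assumes a != -b),
    about the axis a x b, via Rodrigues' formula."""
    v, c = np.cross(a, b), a @ b
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

def jupp_residuals(Y, Y_hat, y0, zeta1, zeta2):
    """Projected rotated residuals (24); Y, Y_hat are n-by-3 arrays of unit
    vectors, y0 the common tangent point, zeta1/zeta2 orthonormal to y0."""
    T = np.empty((len(Y), 2))
    for i in range(len(Y)):
        r = (np.eye(3) - np.outer(Y_hat[i], Y_hat[i])) @ Y[i]  # crude residual
        s = rotation_between(Y_hat[i], y0) @ r                 # rotated, (23)
        T[i] = np.array([zeta1, zeta2]) @ s                    # projected, (24)
    return T
```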

For parametric regression models with ESAG or Kent errors, Jupp’s residuals are potentially limited in that they are not model-based and hence do not take into account the dispersion of errors in the fitted model, i.e. (23) is a function of the fitted value \(\hat{{\mathbf {y}}}_i\) but not of the parameters that determine dispersion.

Fig. 1

Data and residuals for the model described in Sect. 5.1: a A data set with \(n=41\) plotted on the sphere; points with indices \(i = 1,11,21,31,41\) are marked in red with corresponding 90%-coverage contours of ESAG\(({\varvec{\mu }}_i,{\varvec{\gamma }}_i)\), shown in red for the true parameters and black for the fitted. b The same data projected into the tangent plane at the sample mean, and c \({\varvec{\eta }}\)-residuals (25) for the fitted model. d, e, respectively, show \({\varvec{\eta }}\)-residuals and Jupp residuals (24) for a larger data set of \(n=401\) data points from the same model

We define model-based residuals for ESAG and Kent error models as follows, in each case motivated by high-concentration Gaussian limits of each distribution, although we expect the residuals to be useful for detecting model inadequacy even in non high-concentration settings. For a random variable \({\mathbf {y}} \sim \text {ESAG}({\varvec{\mu }}, {\varvec{\gamma }})\) consider the corresponding random variable

$$\begin{aligned} {\varvec{\eta }}_\text {ESAG}({\mathbf {y}}; {\varvec{\mu }}, {\varvec{\gamma }}) = \begin{pmatrix} \eta _1 \\ \eta _2 \end{pmatrix} = \Vert {\varvec{\mu }}\Vert \begin{pmatrix} \rho _1^{-1/2} \varvec{\xi }_1^\top \\ \rho _1^{1/2} \varvec{\xi }_2^\top \\ \end{pmatrix} {\mathbf {y}}, \end{aligned}$$

where \(\rho _1, \varvec{\xi }_1, \varvec{\xi }_2\) are as defined in (9). From Proposition 2 in Paine et al. (2017), provided \(\Vert {\varvec{\mu }}\Vert \) is large, we have approximately that \({\varvec{\eta }}_\text {ESAG} \sim N_2({\mathbf {0}}, {\mathbf {I}})\). Hence for regression models with ESAG errors, we define residuals

$$\begin{aligned} {\varvec{\eta }}_{i} = {\varvec{\eta }}_{\text {ESAG}}({\mathbf {y}}_i; {\hat{{\varvec{\mu }}}}_i, {\hat{{\varvec{\gamma }}}}_i) \text { for } i=1, \ldots ,n, \end{aligned}$$
(25)

where \({\hat{{\varvec{\mu }}}}_i = \hat{{\mathbf {B}}}_1 {\mathbf {x}}_i\) and \({\hat{{\varvec{\gamma }}}}_i = \hat{{\mathbf {B}}}_2 {\mathbf {x}}_i\). Then, a scatterplot of \({\varvec{\eta }}_1, \ldots , {\varvec{\eta }}_n\) can be compared with random \(N_2({\mathbf {0}}, {\mathbf {I}})\) scatter; see Fig. 1 for examples.
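A sketch of the computation of (25), reconstructing \(\rho _1, {\varvec{\xi }}_1, {\varvec{\xi }}_2\) from the fitted \(({\hat{{\varvec{\mu }}}}_i, {\hat{{\varvec{\gamma }}}}_i)\) using (11)–(15) and Lemma 2 (function names are ours):

```python
import numpy as np

def esag_eta_residuals(Y, X, B1_hat, B2_hat):
    """eta-residuals (25) for a fitted model (20); Y is n-by-3, X is n-by-q,
    B1_hat is 3-by-q and B2_hat is 2-by-q."""
    eta = np.empty((len(Y), 2))
    for i, (y, x) in enumerate(zip(Y, X)):
        mu = B1_hat @ x
        g1, g2 = B2_hat @ x
        mu0 = np.hypot(mu[1], mu[2])                  # assumed > 0
        xt1 = np.array([-mu0**2, mu[0] * mu[1], mu[0] * mu[2]]) / (mu0 * np.linalg.norm(mu))
        xt2 = np.array([0.0, -mu[2], mu[1]]) / mu0
        beta = np.hypot(g1, g2)
        psi = 0.5 * np.arctan2(g2, g1)                # gamma = beta (cos 2psi, sin 2psi)
        xi1 = np.cos(psi) * xt1 + np.sin(psi) * xt2   # rotate by R(psi), as in (13)
        xi2 = -np.sin(psi) * xt1 + np.cos(psi) * xt2
        rho1 = np.sqrt(beta**2 + 1) - beta            # from Lemma 2
        eta[i] = np.linalg.norm(mu) * np.array([(xi1 @ y) / np.sqrt(rho1),
                                                np.sqrt(rho1) * (xi2 @ y)])
    return eta
```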

Table 1 Results from fitting various models to the synthetic data, which were generated from model M\(_1\) with H taken to be ESAG, \(n=41\), and using parameters described in Sect. 5.1

Similarly, if \({\mathbf {y}} \sim \text {Kent}(\kappa ,\beta ,{\varvec{\Gamma }})\), then

$$\begin{aligned} {\varvec{\eta }}_\text {Kent}({\mathbf {y}}; \kappa , \beta , {\varvec{\Gamma }}) = \begin{pmatrix} (\kappa - 2\beta )^{1/2} \, \varvec{\xi }_1^\top \\ (\kappa + 2\beta )^{1/2} \, \varvec{\xi }_2^\top \\ \end{pmatrix} {\mathbf {y}} \sim N_2({\mathbf {0}}, {\mathbf {I}}), \end{aligned}$$

approximately, for large \(\kappa \); see property (e) in Kent (1982). For models with Kent errors, writing \({\hat{{\varvec{\Gamma }}}}_i = {\hat{{\varvec{\Gamma }}}}({\mathbf {x}}_i)\), we hence define the residuals

$$\begin{aligned} {\varvec{\eta }}_i = {\varvec{\eta }}_\text {Kent}({\mathbf {y}}_i; {\hat{\kappa }}, {\hat{\beta }}, {\hat{{\varvec{\Gamma }}}}_i). \end{aligned}$$

5 Applications

Here, we consider three applications, in each case investigating the spherical regression models with different statistical goals.

The first involves a simulated data set with a scalar covariate, \(t\in {\mathbb {R}}\). We exploit having a simple data-generating model to illustrate the flexibility within this regression framework for the mean direction and dispersion to depend on the covariate; to investigate the performance of hypothesis tests in detecting anisotropy and regression; and to compare Jupp and \({\varvec{\eta }}\)-residuals in the special setting where the model being fitted is the true one.

The second data set concerns the movement of clouds between two consecutive days. The cloud shapes are represented by landmarks spaced around the cloud outline, and the position of these landmarks is regressed on their positions the previous day. This data set has been considered previously in the context of spherical–spherical regression models with isotropic errors (Rosenthal et al. 2014); hence, it makes for an interesting comparison with the more general framework developed in this paper.

The third data set is derived from vectorcardiogram measurements of heart activity in children. These data too have been studied in the context of spherical–spherical isotropic regression (Chang 1986), but with the non-spherical covariates disregarded. The primary goal is inference: to understand which covariates are significantly related to the response. The framework of the present paper enables us to incorporate easily the additional non-spherical covariates, as well as anisotropic errors, and then to test formally whether such generalisations are warranted by the data.

5.1 Simulated data set (involving a scalar covariate)

Denote by \(M_1\) the ESAG2 regression model with \({\varvec{\mu }}_i = (1-t_i) {\varvec{\mu }}^{(0)} + t_i {\varvec{\mu }}^{(1)}\), \(\varvec{\gamma }_i = (1-t_i) \varvec{\gamma }^{(0)} + t_i \varvec{\gamma }^{(1)}\), and \(t_i = (i-1)/(n-1)\) for \(i=1, \ldots ,n\). In the notation of (20), \({\mathbf {B}}_1 = \left( {\varvec{\mu }}^{(0)}, {\varvec{\mu }}^{(1)} - {\varvec{\mu }}^{(0)} \right) \), \({\mathbf {B}}_2 = \left( {\varvec{\gamma }}^{(0)}, {\varvec{\gamma }}^{(1)} - {\varvec{\gamma }}^{(0)} \right) \), and \({\mathbf {x}}_i = (1, t_i)^\top \). Figure 1a–c shows plots for a synthetic data set generated from \(M_1\) using \({\varvec{\mu }}^{(0)} = (5, 10, 2)^\top \), \({\varvec{\mu }}^{(1)} = (-5, 10, 2)^\top \), \({\varvec{\gamma }}^{(0)} = (2, 3)^\top \), \({\varvec{\gamma }}^{(1)} = (-2, 5)^\top \), and \(n=41\). As a visual aid, plot markers corresponding to the subset of points with indices \(i = 1,11,21,31,41\) are coloured red. Figure 1a shows the data, together with contours of constant probability density with 90% coverage for the true and fitted parameters, for the covariates corresponding to the red-marked points. The data-generating parameters are deliberately chosen here to produce highly anisotropic dispersion, as can be seen from the highly eccentric contours. These contours are well matched by corresponding contours of the fitted model, indicating that the parameters have been estimated well. Figure 1b shows the same data projected onto the tangent plane at the sample mean, with the index used as the point marker so that the ordering of the points can be seen. Figure 1c is a plot of \({\varvec{\eta }}\)-residuals, which seem consistent with IID bivariate normal scatter, indicating that the model is reasonable. This is expected since the data-generating model is a special case of the model being fitted. Exploring residuals further, Fig. 1d shows residuals analogous to those in c but this time for a larger sample size of \(n=401\), with the corresponding Jupp projected residuals (24) shown in e. The Jupp residuals appear non-Gaussian and anisotropic, even though the fitted model is appropriate to the data, making these residuals harder in general to interpret for model diagnostics.

Fig. 2

a The cloud formation data described in Sect. 5.2. The red points are landmarks on the outline of the cloud on a particular day, and the yellow points connected by blue lines are the corresponding landmarks on the following day. In the regression, we treat the red points as covariates and the yellow points as the response. b \({\varvec{\eta }}\)-residuals (25) for the fitted ESAG2 model; the residuals are numbered clockwise starting from the point indicated in a. The table in c shows results from fitting various models to the cloud data. For the PLT model, \({\mathbf {A}} \in \text {SL}(3)\), and the covariate vector is \({\mathbf {u}}_i \in {\mathbb {S}}^2\), without an “intercept” element included. (Color figure online)

We can use the inference procedures described in Sect. 3.3 to test for significance of anisotropy and regression. Table 1 shows the maximised log-likelihood for the true model, \(M_1\), and several other models involving various combinations of the two model structures and two error distributions. Using Wilks’ statistic and the null asymptotic \(\chi ^2\) approximation (21) to compare \(M_1\) with each of models \(M_2\), \(M_3\), \(M_4\) with errors assumed to be ESAG results in p values \(<10^{-5}\) in each case, indicating very strong evidence to favour the data-generating model over the simpler alternatives, which include the isotropic (\(M_3\)) and IID (\(M_4\)) models. When Kent errors are assumed for the fitted model, i.e. in contrast to the ESAG errors used in generating the data, the statistical conclusions (and even to some extent the numerical values of the maximised log-likelihoods) are very robust to this misspecification. This is probably a consequence of how similar the ESAG and Kent densities are in practice, especially if the concentration is reasonably high. The table also shows the results of fitting Structure 1 models \(M_5\)–\(M_8\) to the Structure 2-generated data. Here, model \(M_5\) is not favoured strongly over \(M_6\), in contrast to how \(M_1\) is strongly favoured over \(M_2\). The explanation is that models \(M_2\) and \(M_6\) are only loosely analogous as submodels of \(M_1\) and \(M_5\), respectively. A major difference is that \(M_2\) cannot capture the way the orientation of the anisotropy substantially depends on the covariate, because \({\varvec{\gamma }}\) does not depend on the covariate, whereas \(M_6\) can still do so via \({\mathbf {R}}({\mathbf {x}}_i)\) even when \({\mathbf {S}}({\mathbf {x}}_i)\) is fixed to be the identity matrix. The conclusions to reject isotropy (\(M_7\)) and the assumption of IID data (\(M_8\)) in favour of \(M_5\) are both robust to the model misspecification.

5.2 Cloud formation data (involving a spherical covariate)

These data involve 29 landmarks spaced around the outline of a cloud to represent its shape on each of two consecutive days, 4th and 5th Sept 2012. The data, see Fig. 2, are from NASA’s Visible Earth project [with original cloud images from XPlanet (2018)] and were used as an application by Rosenthal et al. (2014) in assessing accuracy of their PLT model, albeit with a focus on prediction rather than inference. The goal is to regress the landmarks \(\{{\mathbf {y}}_i\}_{i=1}^{29}\) for the second day on those \(\{{\mathbf {u}}_i\}_{i=1}^{29}\) of the first. We hence define a covariate vector, including “intercept”, as \({\mathbf {x}}_i = \left( 1, \, {\mathbf {u}}_i^\top \right) ^\top \).

The models we fitted to these data, and the corresponding values of the maximised log-likelihood, are shown in Fig. 2c. The maximised log-likelihood values show that each of the models with anisotropic errors is very strongly favoured over its isotropic counterpart. The non-nestedness of the models otherwise makes them hard to select between formally. Model ESAG2 has substantially the largest log-likelihood value, although recall that the transformation \({\mathbf {Q}}\) used for the Structure 2 models is chosen specifically to maximise the ESAG2 log-likelihood.

Table 2 Results for Structure 1 models and submodels fitted to the vectorcardiogram data shown in Fig. 3, and described in Sect. 5.3
Table 3 Results for Structure 2 models and submodels fitted to the vectorcardiogram data

The residuals of the fitted ESAG2 model, shown in Fig. 2b, display a small amount of serial correlation (points 21–27), but otherwise little to suggest that the model fits poorly.

5.3 Vectorcardiogram data (involving a mixed-type covariate)

This data set was considered by Chang (1986) in the context of his spherical–spherical regression models. Here, our more general model enables incorporation of other covariates, and of anisotropic errors.

The data themselves are derived from vectorcardiogram measurements of the electrical activity of the heart of children of different ages and genders. The vectorcardiogram involves three leads connected to the torso, which produce a time-dependent vector that traces approximately closed curves, each representing a heartbeat cycle, in \({\mathbb {R}}^3\). Sometimes used as a summary for clinical diagnosis is a unit vector defined as the directional component of the vector at a particular extremum across the cycles. The data comprise such unit vectors derived from data for two different lead placement systems, the Frank system (\({\mathbf {y}}_i \in {\mathbb {S}}^2\)) and the McFee system (\({\mathbf {u}}_i \in {\mathbb {S}}^2\)), for each of 98 children of different ages and genders. Age is represented by a binary variable \(A_i \in \{0,1\}\) (0 meaning aged 2–10 years, and 1 meaning aged 11–18 years) and gender by a variable \(G_i \in \{0,1\}\) (0 for a boy, and 1 for a girl). We aim to regress \({\mathbf {y}}_i\) on the other variables, and hence take the covariate to be \({\mathbf {x}}_i = \left( 1, \, {\mathbf {u}}_i^\top , \, A_i, \, G_i \right) ^\top \), for \(i=1, \ldots , 98\).

To identify the meaning of the parameters in the parameter matrix \({\mathbf {B}}\), we write it in the block structure

$$\begin{aligned} \mathbf{B} = \begin{pmatrix} {\pmb \beta }_{1}^0 &{} \mathbf{B}^u_{1} &{} {\pmb \beta }^A_{1} &{} {\pmb \beta }^G_{1} \\ {\pmb \beta }_{2}^0 &{} \mathbf{B}^u_{2} &{} {\pmb \beta }^A_{2} &{} {\pmb \beta }^G_{2} \end{pmatrix}, \end{aligned}$$
(26)

where \({\pmb \beta }^0_1\), \({\pmb \beta }_1^A\) and \({\pmb \beta }_1^G\) are \(3 \times 1\) and \(\mathbf{B}_1^u\) is \(3 \times 3\); and \({\pmb \beta }^0_2\), \({\pmb \beta }_2^A\) and \({\pmb \beta }_2^G\) are \(s \times 1\) and \(\mathbf{B}_2^u\) is \(s \times 3\), where \(s=1\) for Structure 1 models and \(s=2\) for Structure 2 models. Setting any of these blocks equal to zero amounts to removing the influence of a particular covariate on particular parameters in the error model. For example, in Structure 2 models, setting \({\pmb \beta }_1^A = {\mathbf {0}}\) means that the covariate \(A_i\) does not influence \({\varvec{\mu }}\).

Fig. 3

a The vectorcardiogram data described in Sect. 5.3. Red and yellow markers, respectively, indicate covariates, \(\{{\mathbf {u}}_i\}\), and responses, \(\{{\mathbf {y}}_i\}\), and the blue lines indicate the pairings. b The \({\varvec{\eta }}\)-residuals for fitted ESAG2 models \(M_3\) (the preferred model) and \(M_{15}\) (the model in which age and gender covariates are ignored and errors are assumed isotropic). Red point markers denote girls and blue denotes boys; diamonds denote the 2–10 age group and crosses denote the 11–18 age group. (Color figure online)

Tables 2 and 3, respectively, show the results of fitting Structure 1 and 2 models, and several submodels, to the vectorcardiogram data. Within each table, each of the submodels is nested within \(M_1\), and some of the submodels are further nested within each other. Pairwise comparisons of relevant nested models, using the likelihood ratio tests described in Sect. 3.3 at the 5% level, suggest that for both ESAG1 and Kent1 the preferred model is \(M_{15}\). This suggests that Structure 1 is poor for characterising how the response depends on the covariates for this application, to the extent that there is little benefit to retaining the covariates in the model. In contrast, for both ESAG2 and Kent2 the preferred model is \(M_3\), which retains all of the covariates.

Figure 3b shows \({\varvec{\eta }}\)-residuals for ESAG2 models \(M_3\) and \(M_{15}\). For \(M_3\), which is the preferred model, the residuals are reasonably consistent with \(N_2({\mathbf {0}},{\mathbf {I}})\) scatter. For \(M_{15}\), which assumes isotropic errors and neglects the age and gender covariates, the scatter appears less isotropic, and there are slight differences in the scatter according to age and gender, consistent with there being residual variation due to these factors not being incorporated in the model.

6 Conclusions

The regression models we have introduced are rather more general than existing regression models in the literature, allowing covariates with general structure, and errors that are anisotropic. We have also introduced novel model-based residuals that enable simple visual diagnostics to check fitted models, to identify for example any residual structure dependent on a covariate, any serial correlation or any outliers, and to explore adequacy of the error models.

For the anisotropic error model, there is little to choose on statistical grounds between Kent and ESAG, though we have found occasions, for models based on the Kent distribution, where the likelihood function is harder to maximise numerically (perhaps owing to roughness in the likelihood approximation arising from approximating the normalising constant). Models based on ESAG are free from this issue, and the computation of the ESAG likelihood is much faster. Of the two model structures we considered, models with Structure 2 tended to perform better; such models are also simpler and enable the influence of particular covariates to be related more directly to the response variable. On the foregoing grounds, ESAG2 models are our preferred ones.

The likelihood framework in which we have developed the models makes it very easy to use classical methods to compare nested models of different complexity, in particular to test hypotheses about significance of regression or the anisotropy of errors. Indeed, applying such tests for the examples considered provides strong support that the regression modelling generalisations we have developed are warranted.