When is there a Representer Theorem? Nondifferentiable Regularisers and Banach spaces

We consider a general regularised interpolation problem for learning a parameter vector from data. The well-known representer theorem says that under certain conditions on the regulariser there exists a solution in the linear span of the data points. This is at the core of kernel methods in machine learning as it makes the problem computationally tractable. Necessary and sufficient conditions for differentiable regularisers on Hilbert spaces to admit a representer theorem have been proved. We extend those results to nondifferentiable regularisers on uniformly convex and uniformly smooth Banach spaces. This gives a (more) complete answer to the question of when there is a representer theorem. We then note that for regularised interpolation the solution is in fact determined by the function space alone, independently of the regulariser, making the extension to Banach spaces even more valuable.

In particular, regularisation in Hilbert spaces has been studied in the literature for various reasons. First of all, the existence of an inner product allows for the design of algorithms with very clear geometric intuitions, often based on orthogonal projections or the fact that the inner product can be seen as a kind of similarity measure. Crucial for the success of regularisation methods in Hilbert spaces, however, is the well-known representer theorem, which states that for certain regularisers there is always a solution in the linear span of the data points (Kimeldorf and Wahba [8], Cox and O'Sullivan [3], Schölkopf and Smola [17,14]). This means that the problem reduces to finding a function in a finite dimensional subspace of the original function space, which is often infinite dimensional. It is this dimension reduction that makes the problem computationally tractable. Another reason for Hilbert space regularisation finding a variety of applications is the kernel trick, which allows any algorithm formulated in terms of inner products to be modified to yield a new algorithm based on a different symmetric, positive semidefinite kernel, leading to learning in reproducing kernel Hilbert spaces (Schölkopf and Smola [15], Shawe-Taylor and Cristianini [16]). This way nonlinearities can be introduced in the otherwise linear setup. Furthermore, kernels can be defined on input sets which a priori do not have a mathematical structure by embeddings into a Hilbert space.
When we speak of regularisation we are referring to Tikhonov regularisation, i.e. an optimisation problem of the form

min { E((⟨f, x_i⟩_H)_{i∈N_m}, (y_i)_{i∈N_m}) + λ⋅Ω(f) ∶ f ∈ H }    (1)

where H is a Hilbert space, {(x_i, y_i) ∶ i ∈ N_m} ⊂ H × Y is a set of given input/output data with Y ⊆ R, E ∶ R^m × Y^m → R is an error function, Ω ∶ H → R a regulariser and λ > 0 a regularisation parameter. Argyriou, Micchelli and Pontil [1] show that under very mild conditions this regularisation problem admits a linear representer theorem if and only if the regularised interpolation problem min {Ω(f) ∶ f ∈ H, ⟨f, x_i⟩_H = y_i ∀i = 1, . . . , m} admits a linear representer theorem. They argue that one can thus focus on the regularised interpolation problem, which is more convenient to study. It is easy to see that their argument carries over to the more general setting which we are going to introduce in this paper, so we take the same viewpoint and consider regularised interpolation.
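In the familiar Hilbert-space case the representer theorem can be checked numerically. The following sketch (plain numpy, with the illustrative choices H = R^d, E the squared error and Ω(f) = ‖f‖², none of which are prescribed by the paper) verifies that both the Tikhonov problem and the minimal norm interpolation problem have solutions in the span of the data points:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 5, 20                      # m data points in the "feature space" H = R^d
X = rng.standard_normal((m, d))   # rows are the data points x_i
y = rng.standard_normal(m)
lam = 0.1

# Tikhonov problem with E the squared error and Omega(f) = ||f||^2,
# solved directly over the whole space ...
f_direct = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# ... and via the representer theorem, f = sum_i c_i x_i, which reduces the
# d-dimensional problem to m coefficients: c = (X X^T + lam I)^{-1} y.
c = np.linalg.solve(X @ X.T + lam * np.eye(m), y)
f_span = X.T @ c
assert np.allclose(f_direct, f_span)

# Regularised interpolation min ||f||^2 s.t. <f, x_i> = y_i: the minimal
# norm interpolant also lies in the span of the data.
f_interp = X.T @ np.linalg.solve(X @ X.T, y)
assert np.allclose(X @ f_interp, y)
```

The dimension reduction mentioned above is visible here: the direct solve works in d = 20 unknowns, the representer form in only m = 5 coefficients.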
Having a large variety of Banach spaces available for potential embeddings may help to overcome the limitations of the Hilbert space setting. Analogous to learning in reproducing kernel Hilbert spaces, the generalisation to Banach spaces allows for learning in reproducing kernel Banach spaces, which have been introduced by Zhang, Xu and Zhang [18]. Our results regarding the existence of representer theorems are in line with Zhang and Zhang's work on representer theorems for reproducing kernel Banach spaces [19]. But as we will show at the end of this paper, the variety of spaces in which to pose the problem is of even greater importance. It is often said that the regulariser favours solutions with a certain desirable property. We will show that in fact for regularised interpolation, when we rely on the linear representer theorem, it is essentially the choice of the space, and only the choice of the space, not the choice of the regulariser, which determines the solution.
It is well known that non-decreasing functions of the Hilbert space norm admit a linear representer theorem. Argyriou, Micchelli and Pontil [1] showed that this condition is not just sufficient but, for differentiable regularisers, also necessary. In this paper we remove the differentiability condition and show that any regulariser on a uniformly convex and uniformly smooth Banach space that admits a linear representer theorem is in fact very close to being radially symmetric, thus giving a (more) complete answer to the question of when there is a representer theorem. Before presenting those results we develop the necessary theory of semi-inner products to generalise the Hilbert space setting considered by Argyriou, Micchelli and Pontil to Banach spaces.
In section 2 we will introduce the notion of semi-inner products as defined by Lumer [11] and later extended by Giles [6]. We state the results without proofs, as they are mostly not difficult and can be found in the original papers.
Another extensive reference on semi-inner products and their properties is the work by Dragomir [5]. After introducing the relevant theory we present the generalised regularised interpolation problem in section 3, replacing the inner product in eq. (1) by a semi-inner product. We then state one of the main results of the paper: regularisers that admit a representer theorem are almost radially symmetric, in a way that is made precise in the statement. Before giving the proof of the theorem we state and prove two essential lemmas capturing most of the important structure of the problem. We finish the section by giving the proof of the main result. Finally, in section 4 we prove that for admissible regularisers there is in fact a unique solution of the regularised interpolation problem in the linear span of the data, and that it is independent of the regulariser. This in particular means that we may choose the regulariser most suitable for the task at hand without changing the solution.

Notation
Before the main sections we briefly introduce some notation used throughout the paper. We use N m as a shorthand notation for the set {1, . . . , m} ⊂ N. We will assume we have m data points {(x i , y i ) ∶ i ∈ N m } ⊂ B × Y , where B will always denote a uniformly convex, uniformly smooth real Banach space and Y ⊆ R.
Typical examples of Y are finite sets of integers for classification problems, e.g. {−1, 1} for binary classification, or the whole of R for regression. We briefly recall the definitions of a Banach space being uniformly convex and uniformly smooth, further details can be found in [2,10,9].

Definition 1.1 (Uniformly convex Banach space)
A normed vector space V is said to be uniformly convex if for every ε ∈ (0, 2] there exists δ > 0 such that for all x, y ∈ V with ‖x‖ = ‖y‖ = 1 and ‖x − y‖ ≥ ε we have ‖(x + y)/2‖ ≤ 1 − δ.

Definition 1.2 (Uniformly smooth Banach space)
A normed vector space V is said to be uniformly smooth if for every ε > 0 there exists δ > 0 such that for all x, y ∈ V with ‖x‖ = 1 and ‖y‖ ≤ δ we have ‖x + y‖ + ‖x − y‖ ≤ 2 + ε⋅‖y‖.

Remark 1.3
There are two equivalent conditions of uniform smoothness which we will make use of in this paper.
(i) The modulus of smoothness of the space V is defined as

ρ_V(δ) = sup { (‖x + y‖ + ‖x − y‖)/2 − 1 ∶ x, y ∈ V, ‖x‖ = 1, ‖y‖ ≤ δ }.    (2)

Now V is uniformly smooth if and only if ρ_V(δ)/δ → 0 as δ → 0.
(ii) The norm on V is said to be uniformly Fréchet differentiable if the limit

lim_{t→0} (‖x + t⋅y‖_V − ‖x‖_V)/t

exists uniformly for all x, y ∈ V with ‖x‖_V = ‖y‖_V = 1, where t runs through the reals. The space V is uniformly smooth if and only if its norm is uniformly Fréchet differentiable.
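The defining property ρ_V(δ)/δ → 0 can be estimated numerically. The following minimal sketch approximates the modulus of smoothness of ℓ^p(R²) for p = 3 over a finite grid of directions (the choice p = 3 is illustrative and not from the paper; uniform smoothness of ℓ^p for 1 < p < ∞ is a standard fact assumed here):

```python
import numpy as np

def rho(delta, p=3.0, n_dirs=120):
    """Grid approximation of the modulus of smoothness of l^p(R^2):
    rho(delta) = sup { (||x+y|| + ||x-y||)/2 - 1 : ||x||_p = 1, ||y||_p <= delta }.
    The sup over ||y||_p <= delta is attained at ||y||_p = delta."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    units = dirs / np.linalg.norm(dirs, p, axis=1, keepdims=True)  # p-norm 1
    best = 0.0
    for x in units:
        for u in units:
            y = delta * u
            val = (np.linalg.norm(x + y, p) + np.linalg.norm(x - y, p)) / 2 - 1
            best = max(best, val)
    return best

# uniform smoothness: the ratio rho(delta)/delta decays as delta -> 0
r1 = rho(0.1) / 0.1
r2 = rho(0.01) / 0.01
assert r2 < r1
assert r2 < 0.05
```

For p ≥ 2 one expects ρ(δ) to be of order δ², so the ratio shrinks roughly linearly in δ, which is what the two sample values show.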
We always write H to denote a Hilbert space, and for the first part of section 2 we will be speaking of general normed linear spaces denoted by V. Once we have seen the reasons to require the space to be a uniformly convex and uniformly smooth Banach space, the remainder of section 2 and the paper will consider such spaces, denoted by B. When only the norm ‖·‖_B on B is considered the subscript will often be omitted for simplicity. Throughout we will denote the inner product on a Hilbert space by ⟨·, ·⟩_H and a semi-inner product on a normed linear space by [·, ·]_V.

Semi-inner product spaces
There are various definitions of semi-inner products aiming to generalise Hilbert space methods to more general spaces. The notion of semi-inner products we are going to use was first introduced by Lumer [11] and further developed by Giles [6]. In comparison to inner products, the assumption of (conjugate) symmetry, or equivalently additivity in the second argument, is dropped. This means that the Cauchy–Schwarz inequality has to be assumed explicitly, as it is crucial for semi-inner products to retain inner-product-like behaviour. In the original definition Lumer did not assume homogeneity in the second argument, but Giles argued that one can assume it without any significant restriction. We will hence include homogeneity in our assumptions. An extensive overview of the theory of this and other notions of semi-inner products can be found in Dragomir [5].
In this section only we state all results for real or complex vector spaces, as all of them are valid in the complex case. Throughout this section we will thus denote the field by F. In the subsequent sections, where we present the main contributions of this paper, we will return to real vector spaces, as it is at this point not clear whether the results remain valid for complex vector spaces.

Definition 2.1 (Semi-inner product)
A semi-inner product (s.i.p.) on a real or complex vector space V is a map [⋅, ⋅]_V ∶ V × V → F with the following properties:
(i) Linearity in the first argument: [λ⋅x + μ⋅y, z]_V = λ⋅[x, z]_V + μ⋅[y, z]_V for all x, y, z ∈ V and λ, μ ∈ F;
(ii) Homogeneity in the second argument: [x, λ⋅y]_V = λ̄⋅[x, y]_V for all x, y ∈ V and λ ∈ F;
(iii) Positive definiteness: [x, x]_V > 0 for all x ≠ 0;
(iv) Cauchy–Schwarz inequality: |[x, y]_V|² ≤ [x, x]_V ⋅ [y, y]_V for all x, y ∈ V.
Every semi-inner product induces a norm via ‖x‖_V = [x, x]_V^{1/2}. Conversely, every norm ‖·‖_V on a linear space V is induced by at least one semi-inner product, i.e. there exists at least one semi-inner product such that [x, x]_V = ‖x‖²_V for all x ∈ V. This means that every normed linear space is an s.i.p. space. Consequently we say that an s.i.p. space V is uniformly convex if the norm induced by [⋅, ⋅]_V is uniformly convex, and that it is uniformly smooth if the induced norm is uniformly smooth. The semi-inner product inducing the norm is in general not unique though. It turns out that we have uniqueness if the norm is differentiable, which is closely linked to a weak continuity property in the second argument of the inducing semi-inner product.
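A concrete example, useful throughout, is the sequence space ℓ^p. The Giles semi-inner product inducing the ℓ^p norm is [x, y] = ‖y‖_p^{2−p} Σ_i x_i |y_i|^{p−1} sgn(y_i); this explicit formula is standard but does not appear in this excerpt, so treat it as an assumption. A small numerical sketch checking the defining properties in the real case:

```python
import numpy as np

def sip(x, y, p=3.0):
    """Giles semi-inner product inducing the l^p norm (real case):
    [x, y] = ||y||_p^(2-p) * sum_i x_i |y_i|^(p-1) sgn(y_i)."""
    ny = np.linalg.norm(y, p)
    if ny == 0.0:
        return 0.0
    return ny ** (2.0 - p) * np.sum(x * np.abs(y) ** (p - 1.0) * np.sign(y))

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 6))
p = 3.0

# the s.i.p. induces the norm: [x, x] = ||x||^2
assert np.isclose(sip(x, x, p), np.linalg.norm(x, p) ** 2)
# linearity in the first argument
assert np.isclose(sip(2 * x + z, y, p), 2 * sip(x, y, p) + sip(z, y, p))
# homogeneity in the second argument (real scalars, including negative ones)
assert np.isclose(sip(x, -1.7 * y, p), -1.7 * sip(x, y, p))
# Cauchy-Schwarz
assert abs(sip(x, y, p)) <= np.linalg.norm(x, p) * np.linalg.norm(y, p) + 1e-12
# but no additivity in the second argument in general (unlike an inner product)
```

For p ≠ 2 the map y ↦ [x, y] is genuinely nonadditive, which is exactly the structure the generalisation has to work around.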
An s.i.p. space V is uniformly smooth if and only if

Re [y, x + t⋅y]_V → Re [y, x]_V    (3)

uniformly for every x, y ∈ V with ‖x‖_V = ‖y‖_V = 1 as R ∋ t → 0. Furthermore, the differential of the norm for x ≠ 0 is given by d/dt ‖x + t⋅y‖_V = Re [y, x + t⋅y]_V / ‖x + t⋅y‖_V. This in particular means that the semi-inner product inducing a uniformly Fréchet differentiable norm is unique.
The existence of a semi-inner product allows us to define a notion of orthogonality analogous to orthogonality in Hilbert spaces by requiring the semi-inner product to be zero. The lack of symmetry of the semi-inner product thus means that our notion of orthogonality is not symmetric in general and x normal to y does not imply that y is normal to x.
Various generalisations of orthogonality have been developed which are equivalent conditions to the inner product being zero in a Hilbert space but generalise to normed linear spaces. One of these notions of orthogonality is James orthogonality [7]. The equivalence of James orthogonality with the inner product being zero in a Hilbert space generalises to smooth Banach spaces in which James orthogonality is equivalent to the unique semi-inner product being zero. James states that his definition is closely related to linear functionals and hyperplanes which is essential for our applications as we will see in the main part of the paper.

Proposition 2.4 (James orthogonality)
In a uniformly smooth s.i.p. space semi-inner product orthogonality is equivalent to James orthogonality, namely for x, y ∈ V

[y, x]_V = 0 ⟺ ‖x + λ⋅y‖_V ≥ ‖x‖_V for all λ ∈ R.

This relation to James orthogonality also helps to get a geometric understanding of what orthogonality means in an s.i.p. space. From proposition 2.4 it is immediately clear that x being normal to y means that the vector y is tangent to the ball B(0, ‖x‖) at the point x, where B(0, ‖x‖) is the ball of radius ‖x‖ centred at the origin.
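The equivalence in proposition 2.4 is easy to probe numerically. In the sketch below (using the standard ℓ^p semi-inner product formula as an assumption, with the illustrative choice p = 3) a tangential direction y with [y, x] = 0 is built by subtracting the appropriate component via linearity in the first argument, and James orthogonality is then checked over a grid of λ:

```python
import numpy as np

def sip(x, y, p=3.0):
    """Giles semi-inner product inducing the l^p norm (real case)."""
    ny = np.linalg.norm(y, p)
    return ny ** (2.0 - p) * np.sum(x * np.abs(y) ** (p - 1.0) * np.sign(y))

rng = np.random.default_rng(2)
p = 3.0
x = rng.standard_normal(5)
z = rng.standard_normal(5)

# make y tangential at x, i.e. [y, x] = 0, using linearity in the first argument
y = z - (sip(z, x, p) / sip(x, x, p)) * x
assert abs(sip(y, x, p)) < 1e-10

# James orthogonality: ||x + lam*y||_p >= ||x||_p for every real lam
lams = np.linspace(-5.0, 5.0, 2001)
norms = [np.linalg.norm(x + lam * y, p) for lam in lams]
assert min(norms) >= np.linalg.norm(x, p) - 1e-9
```

Geometrically this is exactly the tangency picture: moving from x in the direction y never takes you inside the ball B(0, ‖x‖).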
Having defined what it means to be orthogonal to a linear subspace we can also define the orthogonal complement of a subspace. It will become clear later that this definition coincides with the usual definition of orthogonal complements in Banach spaces via the dual space.

Definition 2.5 (Orthogonal Complement)
Let V be an s.i.p. space and U a closed linear subspace. Then the orthogonal complement of U is defined to be

U^⊥ = { v ∈ V ∶ [u, v]_V = 0 for all u ∈ U }.

If the space is a uniformly convex Banach space it is not difficult to see that there is a unique orthogonal decomposition for every x ∈ V. This is because it is known that in a uniformly convex space there is a unique closest point in a closed linear subspace, and one easily checks that this immediately leads to a unique orthogonal decomposition.
Proposition 2.6 (Orthogonal Decomposition) Let V be a uniformly convex s.i.p. space. Then for any closed linear subspace U ⊂ V there exists a unique orthogonal decomposition; more precisely, for any x ∈ V there exists a unique x_0 ∈ U and a unique x^⊥ ∈ U^⊥ such that x = x_0 + x^⊥.
Under these assumptions we are also able to establish a Riesz representation theorem using the semi-inner product.

Theorem 2.7 (Riesz representation theorem)
Let V be a uniformly convex, uniformly smooth s.i.p. space. Then for every f ∈ V*, the continuous dual space of V, there exists a unique vector y ∈ V such that f(x) = [x, y]_V for all x ∈ V, and ‖f‖_{V*} = ‖y‖_V.
This theorem is crucial for the development of the theory in this paper as it means that the duality map x ↦ x*, given by x*(y) = [y, x]_V for all y ∈ V, is an isometric isomorphism from V to V*. It is essential to note that this map is linear if and only if V is a Hilbert space.
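For ℓ^p the duality map has the standard explicit form x*_i = |x_i|^{p−1} sgn(x_i) / ‖x‖_p^{p−2}, with x* ∈ ℓ^q for 1/p + 1/q = 1; this formula is an assumption here, not stated in the excerpt. The sketch below checks the isometry and, crucially, the failure of linearity for p ≠ 2:

```python
import numpy as np

def dual(x, p=3.0):
    """Duality map for l^p: x*_i = |x_i|^(p-1) sgn(x_i) / ||x||_p^(p-2),
    an element of l^q with 1/p + 1/q = 1, acting as x*(y) = [y, x]."""
    nx = np.linalg.norm(x, p)
    return np.abs(x) ** (p - 1.0) * np.sign(x) / nx ** (p - 2.0)

p = 3.0
q = p / (p - 1.0)
rng = np.random.default_rng(3)
x, y = rng.standard_normal((2, 4))

# isometry: ||x*||_q = ||x||_p, and x*(x) = [x, x] = ||x||^2
assert np.isclose(np.linalg.norm(dual(x, p), q), np.linalg.norm(x, p))
assert np.isclose(dual(x, p) @ x, np.linalg.norm(x, p) ** 2)
# homogeneity: (lam*x)* = lam*x* for real lam ...
assert np.allclose(dual(-2.0 * x, p), -2.0 * dual(x, p))
# ... but NOT additivity: the duality map is nonlinear unless p = 2
assert not np.allclose(dual(x + y, p), dual(x, p) + dual(y, p))
```

It is precisely this nonlinearity that forces the representer theorem in section 3 to be stated in terms of dual elements rather than in the space itself.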
Summarising the above results, we see that the structure necessary to have a unique semi-inner product inducing the norm and to allow for a Riesz representation theorem is that of a uniformly convex and uniformly Fréchet differentiable Banach space. For simplicity we will call such spaces uniform.

Definition 2.8 (Uniform Banach space)
We say a space V is uniform if it is a uniformly convex and uniformly Fréchet differentiable Banach space.
For the remainder of the paper we will only be working with uniform Banach spaces and throughout denote them by B.
Note that any Banach space that is uniformly convex or uniformly Fréchet differentiable is reflexive. Further, a Banach space is uniformly Fréchet differentiable if and only if its dual space is uniformly convex. Thus for a uniform Banach space B its dual space B* is also uniform, and its norm-inducing semi-inner product is given by

[x*, y*]_{B*} = [y, x]_B for all x, y ∈ B.

We already know that the duality map is a homogeneous isometric isomorphism. Lastly we note that in fact it is also norm-to-norm continuous. The proof of this is standard and can be found in the appendix.
In particular this shows that the continuity property eq. (3) can in fact be strengthened to [y, x + t⋅z]_B → [y, x]_B as t → 0 for all x, y, z ∈ B.
Thus the duality map is a homeomorphism from B to B* with respect to the norm topologies.

Existence of Representer Theorems
The definitions and results of the previous section allow us to consider the regularised interpolation problem

min { Ω(f) ∶ f ∈ B, [f, x_i]_B = y_i ∀i ∈ N_m }    (4)

where the domain B of the interpolation problem is a real uniform Banach space. This generalises the setting considered by Argyriou, Micchelli and Pontil in [1], where the case of a Hilbert space domain is considered. In that setting the linear representer theorem states that there exists a solution to the interpolation problem which lies in the linear span of the data points. Our work, similarly to [12], hints that in its essence the representer theorem is a result about the dual space rather than the space itself. Since in a Hilbert space the dual element is the element itself, this does not become apparent in that setting and one obtains a result in the space itself. As the duality map is nonlinear for any Banach space which is not a Hilbert space, we need to adjust the formulation of the representer theorem. Namely, the linear representer theorem in a uniform Banach space states that there exists a solution such that its dual element is in the linear span of the dual elements of the data points. This is made precise in the following definition, the analogue of Argyriou, Micchelli and Pontil calling regularisers which always admit a linear representer theorem admissible.

Definition 3.1 (Admissible Regulariser)
We say a function Ω ∶ B → R is admissible if for any m ∈ N and any given data {(x_i, y_i) ∶ i ∈ N_m} ⊂ B × Y such that the interpolation constraints can be satisfied, the regularised interpolation problem eq. (4) admits a solution f_0 such that its dual element is of the form

f_0* = Σ_{i∈N_m} c_i⋅x_i*  for some c_i ∈ R.

With this definition at hand it is now our goal to classify all admissible regularisers. It is well known that being a non-decreasing function of the norm on a Hilbert space is a sufficient condition for the regulariser to be admissible. By a Hahn–Banach argument similar to e.g. Zhang and Zhang [19] this generalises to our case of uniform Banach spaces. Below we show that this condition is already almost necessary, in the sense that admissible regularisers cannot be far from radially symmetric.
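The dual form of the representer theorem can be observed numerically. The sketch below solves a one-constraint instance of a problem of the shape of eq. (4) in ℓ^p(R³) with Ω = ‖·‖_p by brute force over the constraint plane (p = 3 and the data are illustrative choices; the ℓ^p duality map formula is assumed, not from the excerpt). Since the duality map is homogeneous, a solution with dual element in span{x*} must be a multiple of x, and the constraint then fixes the multiple:

```python
import numpy as np

p = 3.0

def dual(x):
    """Duality map of l^p: components |x_i|^(p-1) sgn(x_i) / ||x||_p^(p-2)."""
    return np.abs(x) ** (p - 1.0) * np.sign(x) / np.linalg.norm(x, p) ** (p - 2.0)

# one data point (x, y): minimise ||f||_p subject to [f, x] = y.  By linearity
# of the s.i.p. in its first argument, the constraint is affine: w . f = y.
x = np.array([1.0, 2.0, -1.0])
y = 1.5
w = dual(x)

# brute force over the constraint plane f = f_part + N t, N spanning ker(w)
f_part = w * y / (w @ w)
N = np.linalg.svd(w.reshape(1, -1))[2][1:].T       # (3, 2) null-space basis
ts = np.linspace(-1.0, 1.0, 201)
grid = [(np.linalg.norm(f_part + N @ np.array([a, b]), p), a, b)
        for a in ts for b in ts]
_, a, b = min(grid)
f_num = f_part + N @ np.array([a, b])

# the representer theorem predicts f0* in span{x*}, i.e. f0 = c x by
# homogeneity of the duality map; the constraint gives c = y / ||x||_p^2
f_rep = (y / np.linalg.norm(x, p) ** 2) * x
assert np.allclose(f_num, f_rep, atol=0.05)
assert np.isclose(w @ f_rep, y)
```

Note that the minimiser lies in span{x} here only because there is a single data point; with several points the solution is in general not in the span of the x_i themselves, only its dual element is in the span of the x_i*.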

Theorem 3.2 A function Ω is admissible if and only if it is of the form Ω(f) = h(‖f‖_B)
for some non-decreasing h ∶ [0, ∞) → R whenever ‖f‖_B ≠ r for all r ∈ R. Here R is an at most countable set of radii at which h has a jump discontinuity. For any f with ‖f‖_B = r ∈ R the value Ω(f) is only constrained by the monotonicity property, i.e. it has to lie between lim_{t↗r} h(t) and lim_{t↘r} h(t).
In other words, Ω is radially non-decreasing and radially symmetric except for at most countably many circular jump discontinuities, at which the function value is only limited by the monotonicity property.
In [1] Argyriou, Micchelli and Pontil show that any admissible regulariser on a Hilbert space is non-decreasing in orthogonal directions. An analogous result holds for uniform Banach spaces, but since orthogonality is no longer symmetric, the intuition gained from the equivalence with James orthogonality shows that it is in fact tangential directions in which the regulariser is non-decreasing. This also becomes clear from the proofs in [1], in particular the proof of radial symmetry. Before we can prove the analogous result for uniform Banach spaces we need to show that this tangential bound extends considerably: a function that is non-decreasing in tangential directions is in fact non-decreasing in norm, as made precise in the following lemma.

Lemma 3.3
If Ω ∶ B → R is such that Ω(f + f_T) ≥ Ω(f) for all f, f_T ∈ B with [f_T, f]_B = 0, then for any fixed f̄ ∈ B we have that Ω(f) ≤ Ω(f̄) for all f such that ‖f‖ < ‖f̄‖.

Proof:
Part 1: (Bound Ω on the half space given by the tangent through f̄) We start by showing that Ω is radially non-decreasing. Since Ω is non-decreasing along tangential directions, this immediately gives the claimed bound for the entire half space given by the tangent through f̄. The idea of the proof is to move out along a tangent until we can move back along another tangent to hit a given point along the ray λ⋅f̄, as shown in fig. 1. Fix some f̄ ∈ B and 1 < λ ∈ R and set f = λ⋅f̄. We need to show that Ω(f) ≥ Ω(f̄).
Let f_T ∈ B be a tangential direction at f̄, i.e. [f_T, f̄]_B = 0, and set f_t = f̄ + t⋅f_T and g_t = f − f_t. Note that by strict convexity and continuity of the norm, ‖f_t‖ = ‖f̄ + t⋅f_T‖ is continuous and strictly increasing in t ≥ 0. Now since t⋅f_T is the tangent through f̄ and g_t points from f_t to f, for small t for which ‖f_t‖ < ‖f‖ we must have that ‖f_t + s⋅g_t‖ > ‖f_t‖ for all s ∈ (0, 1). On the other hand, for t big enough so that ‖f_t‖ > ‖f‖, we must have ‖f_t + s⋅g_t‖ < ‖f_t‖ for small s ∈ (0, 1). Since the dual map is norm-to-norm continuous, [g_t, f_t]_B is clearly continuous in t. By the above discussion this expression is positive for small t and negative for large t, so by the intermediate value theorem there exists t_0 such that [g_{t_0}, f_{t_0}]_B = 0, and thus g_{t_0} is tangential to f_{t_0}. But this means that Ω(f) = Ω(f_{t_0} + g_{t_0}) ≥ Ω(f_{t_0}) ≥ Ω(f̄) as claimed.
Hence we have the bound along the entire ray λ⋅f̄ for 1 < λ ∈ R, which extends along all tangents through those points to the half space given by the tangent through f̄, i.e. the shaded region in fig. 1.

Part 2: (Extend the bound around the circle)
Next we note that we can actually extend the bound further to apply all the way around the circle, namely Ω(f) ≥ Ω(f̄) for all f such that ‖f‖ > ‖f̄‖. This is done by considering f_t = f̄ + t⋅f_T as before, but then instead of following the tangent into the half space just considered we follow the tangent in the opposite direction around the circle, as shown in fig. 2a. We fix another point along that tangent and repeat the process, moving around the circle. We claim that by making the step size along each tangent small enough we can this way move around the circle while staying arbitrarily close to it.
More precisely we need to show that the distance a step along a tangent takes us away from the circle decreases faster than the step along the tangent so that we move considerably further around the circle than away from it with each step, as shown in fig. 2b.
Let ρ_B be the modulus of smoothness of B as stated in eq. (2). For a step along a tangent f_T of length δ we thus easily see that the increase in norm, ‖f̄ + f_T‖ − ‖f̄‖, is of order ρ_B(δ). This means that for a step of order δ along a tangent, i.e. f_T of length δ, we take a step of order ρ_B(δ) away from the circle. But since B is uniformly smooth we have that ρ_B(δ)/δ → 0 as δ → 0, proving that for small enough δ the step away from the circle is indeed significantly smaller than the step along the tangent, as shown in fig. 2b. Combining both arguments this proves that we can reach any point with norm greater than ‖f̄‖ from f̄ only by moving along tangents, giving the claimed bound.

∎
Having proved this lemma we are now in the position to prove that indeed any admissible regulariser on a uniform Banach space is non-decreasing in tangential directions. Note that the previous lemma will also play a crucial role in removing the differentiability assumption when establishing the closed form representation of the regulariser in theorem 3.2.

Lemma 3.4
A function Ω is admissible if and only if Ω(f + f_T) ≥ Ω(f) for every f, f_T ∈ B with [f_T, f]_B = 0, if and only if for any fixed f̄ and all f such that ‖f‖ < ‖f̄‖ we have Ω(f) ≤ Ω(f̄).

Proof:
Part 1: (Ω admissible ⇒ nondecreasing along tangential directions) Fix any f ∈ B and consider the regularised interpolation problem eq. (4) with the single data point (f, ‖f‖²_B), i.e. with the constraint [g, f]_B = ‖f‖²_B. As Ω is assumed to be admissible, there exists a solution with dual element in span{f*}, which by homogeneity of the dual map clearly is a multiple of f; the constraint then forces this solution to be f itself. For any f_T with [f_T, f]_B = 0, by linearity in the first argument [f + f_T, f]_B = ‖f‖²_B, so f + f_T also satisfies the constraint and hence necessarily Ω(f + f_T) ≥ Ω(f) as claimed. The second claim follows immediately from lemma 3.3.
Part 2: (Nondecreasing along tangential directions ⇒ Ω admissible) Conversely, fix any data {(x_i, y_i) ∶ i ∈ N_m} ⊂ B × Y such that the interpolation constraints can be satisfied. Let f_0 be a solution to the regularised interpolation problem. If f_0* ∈ span{x_i* ∶ i ∈ N_m} we are done, so assume it is not. We let X* = span{x_i* ∶ i ∈ N_m} and X = {f ∈ B ∶ f* ∈ X*}. Further denote by Z ⊂ B the space corresponding to the orthogonal complement of X*, i.e. Z = {g ∈ B ∶ [x*, g*]_{B*} = 0 for all x* ∈ X*}.

Now by definition we have that [x_i*, g*]_{B*} = [g, x_i]_B for every g ∈ B, so that Z = ⋂_{i∈N_m} ker x_i*,
and hence the codimension of Z is m. Without loss of generality we can assume that not all y_i are zero, as otherwise f_0 = 0 with f_0* = 0 is a trivial solution in the span of the data points. Since not all y_i are zero, f_0 ∉ Z and thus codim(span({f_0} ∪ Z)) = m − 1. But since X* = span{x_i* ∶ i ∈ N_m} and the dual map is a homeomorphism, X is homeomorphic to a linear space of dimension m. This means that X ∩ span({f_0} ∪ Z) is homeomorphic to a one-dimensional space and hence in particular contains a nonzero element. Now fix such an element 0 ≠ f ∈ X ∩ span({f_0} ∪ Z). f being nonzero means that f ∉ Z and f ∉ span{f_0}, so f = λ⋅f_0 + μ⋅g for some λ, μ ≠ 0 and g ∈ Z. By homogeneity of the dual map λ⁻¹⋅f ∈ X, and λ⁻¹⋅f = f_0 + ḡ with ḡ = (μ/λ)⋅g ∈ Z. This means we have constructed an f̃_0 = f_0 + f_T with dual element in the span of the dual elements of the data points and f_T = ḡ ∈ Z, which by definition of Z means that f̃_0 satisfies the interpolation constraints. It remains to show that f̃_0 is in norm at most as large as f_0. To this end note that for all f_T ∈ Z by definition [x*, f_T*]_{B*} = 0 for all x* ∈ X*, and hence for f̃_0 = f_0 + f_T ∈ X we get that [f̃_0*, f_T*]_{B*} = 0, i.e. [f_T, f̃_0]_B = 0. But by the equivalence with James orthogonality this means that ‖f̃_0 + λ⋅f_T‖ ≥ ‖f̃_0‖ for all λ ∈ R, and taking λ = −1 gives ‖f_0‖ ≥ ‖f̃_0‖. But by lemma 3.3 we know that a function which is non-decreasing along tangential directions is non-decreasing in norm, so ‖f̃_0‖ < ‖f_0‖ implies that Ω(f̃_0) ≤ Ω(f_0), and so we have found a solution with dual element in the span of the dual elements of the data points as claimed.

∎
Using those two results we can now give the proof that admissible regularisers are almost radially symmetric in the sense of theorem 3.2.
Proof (of theorem 3.2): Part 1: (Ω continuous in radial direction implies Ω radially symmetric) We first show that, instead of differentiability, the assumption that Ω is continuous in radial direction is sufficient to conclude that it has to be radially symmetric. We prove this by contradiction. Assume Ω is admissible but not radially symmetric. Then there exists a radius r so that Ω is not constant on the circle of radius r, and hence there are two points f and g on this circle so that, without loss of generality, Ω(f) > Ω(g). But then by lemma 3.3 for all 1 < λ ∈ R we have Ω(λ⋅g) ≥ Ω(f), and thus Ω(λ⋅g) − Ω(g) ≥ Ω(f) − Ω(g) > 0 for all λ > 1, contradicting radial continuity of Ω at g. Hence Ω has to be constant along every circle as claimed.
Part 2: (Radial mollification preserves being nondecreasing in tangential directions) The observation in part 1 is useful as we can radially mollify a given Ω in such a way that the property of being non-decreasing along tangential directions is preserved. Indeed, let ρ ∶ R → [0, ∞) be a mollifier with support in [−1, 0] and ∫_R ρ(t) dt = 1, and for each ray given by some f_0 ∈ B of unit norm define the mollified regulariser by

Ω̃(r⋅f_0) = ∫_R ρ(t)⋅Ω((r − t)⋅f_0) dt,  r ≥ 0.

We thus obtain a radially mollified regulariser Ω̃ on B which is continuous in radial direction. We check that this function is still non-decreasing along tangential directions, i.e. we need to show that for f_T such that
[f_T, f]_B = 0 we still have

Ω̃(f + f_T) ≥ Ω̃(f).    (8)

Comparing the two integrands, note that by lemma 3.3 we have that Ω((‖f + f_T‖ − t)⋅(f + f_T)/‖f + f_T‖) ≥ Ω((‖f‖ − t)⋅f/‖f‖) whenever |‖f + f_T‖ − t| ≥ |‖f‖ − t|. As t is non-positive we can drop the modulus to obtain that this happens if ‖f + f_T‖ ≥ ‖f‖, which is just James orthogonality and thus follows from the fact that [f_T, f]_B = 0. This proves that the integral estimate eq. (8) indeed holds, and hence the radially mollified Ω̃ is indeed non-decreasing in tangential directions.

Part 3: (Ω is as claimed)
Putting these two observations together we obtain the result. By parts 1 and 2, Ω̃ is radially symmetric and thus of the form Ω̃(f) = h(‖f‖_B) for some continuous, non-decreasing h. Now consider Ω along any two distinct, fixed directions given by unit vectors f_1, f_2 ∈ B, i.e. h_{f_i}(t) = Ω(t⋅f_i) for t ≥ 0. The mollifications of both h_{f_1} and h_{f_2} must equal h, so h_{f_1} = h_{f_2} almost everywhere. Further, by continuity of h they can only differ at points of discontinuity of h_{f_1} and h_{f_2}. As each h_{f_i} is a monotone function on the positive real line it can only have countably many points of discontinuity. Since the bounds above only make statements about values outside a given circle and h is itself monotone, each h_{f_i} is free to attain any value within the monotonicity constraint at those points of discontinuity. This shows that Ω is of the claimed form.

Remark 3.5
We see that everything we say about Ω in this section relies crucially on the observation that its being admissible is a statement about its behaviour along tangents, as stated in lemma 3.4. But there is in fact no tangent into the complex plane: for fixed f̄ there is no tangent that intersects the ray {t⋅e^{iθ}⋅f̄ ∶ t ∈ R} for any θ. Likewise it is not possible to reach any point along said ray via an "out and back" argument as in part 1 of the proof of lemma 3.3. For this reason it is currently not clear whether one can say anything about the situation in complex vector spaces.

The solution is determined by the space
First of all, while it has been known that for regularisers which are a strictly increasing function of the norm every solution is within the linear span of the data, the proofs in section 3 immediately show that something stronger holds. For a regularised interpolation problem with an admissible regulariser to have a solution which is not in the linear span of the data, the regulariser must have a flat region and the solution has to lie within that flat region. But there is more to be said: in fact it turns out that for admissible regularisers the set of solutions in the linear span is independent of the regulariser. In [12] Micchelli and Pontil consider the minimal norm interpolation problem

inf { ‖x‖_X ∶ x ∈ X, L_i(x) = y_i ∀i ∈ N_m }

where X is a Banach space and the L_i are continuous linear functionals on X.
where the last inequality is strict because Σ_{i∈N_m} c_i⋅x_i* peaks at f_0 and by strict convexity it peaks at a unique point. But this inequality shows that ‖f_0‖_B < ‖f_0 + g‖_B for all 0 ≠ g ∈ Z, and thus as Ω is admissible also Ω(f_0) < Ω(f_0 + g), so f_0 is a solution of eq. (4).

∎
This result shows that any admissible regulariser on a uniformly convex and uniformly smooth Banach space has a unique solution in the linear span of the data, and that the solution is the same for every admissible regulariser. This in particular means that it is the choice of the function space, and only the choice of the space, which determines the solution of the problem. We are thus free to work with whichever regulariser is most convenient for the application at hand. Computationally in many cases this is likely going to be ½⋅‖·‖²; for theoretical results other regularisers may be more suitable, such as in the aforementioned paper [12], which relies heavily on a duality between the norm of the space and its continuous linear functionals.
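This independence of the regulariser is easy to see numerically: any non-decreasing h applied to the norm is minimised at the same point of the constraint set. The sketch below compares two regularisers, h_1(s) = s and h_2(s) = s²/2, on a one-constraint problem in ℓ^p(R²) (p = 3 and the data are illustrative; the ℓ^p duality map formula is assumed, not from the paper):

```python
import numpy as np

p = 3.0
x = np.array([1.0, 2.0])          # single interpolation constraint [f, x] = y
y = 1.0
w = np.abs(x) ** (p - 1.0) * np.sign(x) / np.linalg.norm(x, p) ** (p - 2.0)

# parametrise the constraint line f(t) = f_part + t n and evaluate ||f(t)||_p
f_part = w * y / (w @ w)
n = np.array([-w[1], w[0]])       # direction spanning ker(w)
ts = np.linspace(-5.0, 5.0, 20001)
norms = np.linalg.norm(f_part[None, :] + ts[:, None] * n[None, :], p, axis=1)

# two different admissible regularisers h(||f||_p): h1(s) = s and h2(s) = s^2/2.
# As non-decreasing functions of the norm they are minimised at the same point.
t1 = ts[np.argmin(norms)]
t2 = ts[np.argmin(0.5 * norms ** 2)]
assert t1 == t2
```

The minimiser of any strictly increasing h of the norm coincides with the minimal norm interpolant, so the space (here the choice of p) decides the solution and h does not.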