1 Introduction

What are non-commutative holomorphic functions in d variables? The question has been studied since the pioneering work of Taylor [21], but there is still no definitive answer. The class should certainly contain the non-commutative, also called free, polynomials, i.e. polynomials defined on d non-commuting variables. It should be some sort of generalization of the free polynomials, analogous to how holomorphic functions are generalizations of polynomials in commuting variables. Just as in the commutative case, the class will depend on the choice of domain. In this note, we shall consider domains that are sets of d-tuples of operators on a Hilbert space.

One approach is to study non-commutative convergent power series on domains in \({B(\mathscr {H})}^d\) (where \({B(\mathscr {H})}^d\) means d-tuples of bounded operators on some Hilbert space \(\mathscr {H}\)). This has been done systematically in Popescu’s monograph [20], following on earlier work such as [4, 5, 17–19].

Working with non-commutative power series is natural and appealing, but does present some difficulties. One is that assuming a priori that the series converges uniformly is a strong assumption, and could be hard to verify if the function is presented in some other form. On every infinite dimensional Banach space there is an entire holomorphic function with finite radius of uniform convergence [6, p. 461]. A second difficulty is dealing with domains that are not the domains of convergence of power series.

Another approach to non-commutative functions is the theory of nc-functions. Let \(\mathscr {M}_n\) denote the n-by-n complex matrices, which we shall think of as operators on a finite dimensional Hilbert space, and let \({\mathbb M}^{[d]}= \bigcup _{n=1}^\infty \mathscr {M}_n^d\).

If \(x = (x^1, \ldots , x^d) \) and \(y = (y^1, \ldots , y^d)\) are d-tuples of operators on the spaces \(\mathscr {H}\) and \(\mathscr {K}\) respectively, we let \(x {\oplus } y\) denote the d-tuple \( (x^1 {\oplus } y^1, \ldots , x^d {\oplus } y^d)\) on \(\mathscr {H}{\oplus } \mathscr {K}\); and if \(s \in B(\mathscr {H},\mathscr {K})\) and \(t \in B(\mathscr {K},\mathscr {H})\) we let sx and xt denote respectively \((sx^1, \ldots , sx^d)\) and \((x^1 t, \ldots , x^d t)\).

Definition 1.1

A function f defined on some set \(D \subseteq {\mathbb M}^{[d]}\) is called graded if, for each n, f maps \(D \cap \mathscr {M}_n^d\) into \(\mathscr {M}_n\). We say f is an nc-function if it is graded and if, whenever \(x, y \in D\) and there exists a matrix s such that \( s x = y s\), then \(s f(x) = f(y) s\).

The theory of nc-functions has recently become a very active area of research; see e.g. [7–12, 15, 16]. Kaliuzhnyi-Verbovetskyi and Vinnikov have written a monograph [13] which develops the important ideas of the subject.

Nc-functions are a priori defined on matrices, not operators. Certain formulas that represent them (such as (17) below) can be naturally extended to operators. This raises the question of how one can intrinsically characterize functions on \({B(\mathscr {H})}^d\) that are in some sense extensions of nc-functions.

The purpose of this note is to show that on balanced domains in \({B(\mathscr {H})}^d\) there is an algebraic property—intertwining preserving—that together with an appropriate continuity is necessary and sufficient for a function to have a convergent power series, which in turn is equivalent to the function being approximable by free polynomials on finite sets. Moreover, the intertwining preserving property is a variation on the idea of an nc-function. On certain domains \(G_{\delta }^\#\) defined below, the properties of intertwining preserving and continuity are equivalent in turn to the function being the unique extension of a bounded nc-function.

Definition 1.2

Let \(\mathscr {H}\) be an infinite dimensional Hilbert space, let \(D \subseteq {B(\mathscr {H})}^d\), and let \(F:D \rightarrow {B(\mathscr {H})}\). We say that F is intertwining preserving (IP) if:

  1. (i)

    Whenever \(x,y \in D\) and there exists some bounded linear operator \(T \in {B(\mathscr {H})}\) such that \( T x = y T\), then \(T F(x) = F(y) T\).

  2. (ii)

    Whenever \(( x_ n )\) is a bounded sequence in D, and there exists some invertible bounded linear operator \( s :\mathscr {H}\rightarrow \bigoplus \mathscr {H}\) such that

    $$\begin{aligned} s^{-1} \begin{bmatrix} x_1&\quad 0&\quad \cdots \\ 0&\quad x_2&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s \in D, \end{aligned}$$

    then

    $$\begin{aligned} F\left( s^{-1} \begin{bmatrix} x_1&\quad 0&\quad \cdots \\ 0&\quad x_2&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s \right) = s^{-1} \begin{bmatrix} F(x_1)&\quad 0&\quad \cdots \\ 0&\quad F(x_2)&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s. \end{aligned}$$

Note that every free polynomial is IP, and therefore this condition must be inherited by any function that is a limit of free polynomials on finite sets. Nc-functions have the property that \(f(x {\oplus } y) = f(x) {\oplus } f(y)\), and we would like to exploit the analogous condition (ii) of IP functions. To do this, we would like our domains to be closed under direct sums. However, we can only do this by some identification of \(\mathscr {H}{\oplus } \mathscr {H}\) with \(\mathscr {H}\).
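
To see why every free polynomial satisfies condition (i) of Definition 1.2 (a routine verification, included for convenience), note that if \(Tx = yT\), i.e. \(Tx^j = y^j T\) for each j, then for any monomial,

$$\begin{aligned} T\, x^{i_1} x^{i_2} \cdots x^{i_k} = y^{i_1}\, T\, x^{i_2} \cdots x^{i_k} = \cdots = y^{i_1} y^{i_2} \cdots y^{i_k}\, T , \end{aligned}$$

and taking linear combinations gives \(T p(x) = p(y) T\) for every free polynomial p. Condition (ii) holds as well, since \(p(s^{-1} z s) = s^{-1} p(z) s\) for invertible s, and p applied to a block diagonal d-tuple acts block by block.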

Definition 1.3

Let \(\mathscr {H}\) be an infinite dimensional Hilbert space. We say a set \(D \subseteq {B(\mathscr {H})}^d\) is closed with respect to countable direct sums if, for every bounded sequence \(x_1, x_2, \ldots \in D\), there is a unitary \(u:\mathscr {H}\rightarrow \mathscr {H}{\oplus } \mathscr {H}{\oplus } \cdots \) such that the d-tuple \(u^* (x_1 {\oplus } x_2 {\oplus } \cdots )u \in D\).

Two natural examples are the sets

$$\begin{aligned} &\bigl \{x\in {B(\mathscr {H})}^d : \Vert x^1 \Vert ,\dots , \Vert x^d \Vert < 1\bigr \}, \\ &\bigl \{ x \in {B(\mathscr {H})}^d : x^1 (x^1)^* +\dots +x^d (x^d)^* < I \bigr \}. \end{aligned}$$
(1)

Definition 1.4

Let \(F:{B(\mathscr {H})}^d \rightarrow {B(\mathscr {H})}\). We say F is sequentially strong operator continuous (SSOC) if, whenever \(x_n \rightarrow x\) in the strong operator topology on \({B(\mathscr {H})}^d\), then \(F(x_n)\) tends to F(x) in the strong operator topology on \({B(\mathscr {H})}\).

Since multiplication is sequentially strong operator continuous, it follows that every free polynomial is SSOC; this property is also inherited by functions that are limits of free polynomials on sets that are closed with respect to countable direct sums.
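
For completeness, here is the standard estimate behind the first assertion: if \(a_n \rightarrow a\) and \(b_n \rightarrow b\) in the strong operator topology, then \(\sup _n \Vert a_n \Vert < \infty \) by the uniform boundedness principle, and for every vector \(v \in \mathscr {H}\),

$$\begin{aligned} \Vert (a_n b_n - ab) v \Vert \le \Vert a_n (b_n - b) v \Vert + \Vert (a_n - a)\, b v \Vert \le \Bigl ( \sup _n \Vert a_n \Vert \Bigr ) \Vert (b_n - b) v \Vert + \Vert (a_n - a)\, b v \Vert \rightarrow 0 . \end{aligned}$$

Applying this repeatedly to the factors of each monomial shows that every free polynomial is SSOC.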

Here is our first main result. Recall that a subset B of a complex vector space is called balanced if whenever \(x \in B\) and \(\alpha \) is in the closed unit disk \(\overline{\mathbb {D}}\), then \(\alpha x \in B\).

Theorem 1.5

Let D be a balanced open set in \({B(\mathscr {H})}^d\) that is closed with respect to countable direct sums, and let \(F :D \rightarrow {B(\mathscr {H})}\). The following are equivalent:

  1. (i)

    The function F is intertwining preserving and sequentially strong operator continuous.

  2. (ii)

    There is a power series expansion

    $$\begin{aligned} \sum _{k=0}^\infty P_k(x) \end{aligned}$$
    (2)

    that converges absolutely at each point \(x \in D\) to F(x), where each \(P_k\) is a homogeneous free polynomial of degree k.

  3. (iii)

    The function F is uniformly approximable on finite subsets of D by free polynomials.

Let \(\delta \) be an \(I {\times } J\) matrix of free polynomials in d variables, where I and J are any positive integers. Define

$$\begin{aligned} G_{\delta }= \{ x \in {\mathbb M}^{[d]}: \Vert \delta (x) \Vert < 1 \} , \end{aligned}$$
(3)

where if x is a d-tuple of matrices acting on \(\mathbb {C}^n\), then we calculate the norm of \(\delta (x)\) as the operator norm from \((\mathbb {C}^n)^J\) to \((\mathbb {C}^n)^I\). Notice that \(G_{\delta _1} \cap G_{\delta _2} = G_{\delta _1 \oplus \delta _2}\), so those sets form a base for a topology; we call this the free topology on \({\mathbb M}^{[d]}\).
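
Here \(\delta _1 {\oplus } \delta _2\) is read as the block diagonal matrix of free polynomials \(\begin{bmatrix} \delta _1&\quad 0 \\ 0&\quad \delta _2 \end{bmatrix}\), and the identity \(G_{\delta _1} \cap G_{\delta _2} = G_{\delta _1 \oplus \delta _2}\) follows from the elementary calculation

$$\begin{aligned} \Vert (\delta _1 {\oplus } \delta _2)(x) \Vert = \max \bigl \{ \Vert \delta _1(x) \Vert , \Vert \delta _2(x) \Vert \bigr \} , \end{aligned}$$

which is less than 1 exactly when both norms are.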

For the rest of this paper, we shall fix \(\mathscr {H}\) to be a separable infinite dimensional Hilbert space, and let \({B_1(\mathscr {H})}\) denote the unit ball in \({B(\mathscr {H})}\). Let \(\{e_1, e_2, \ldots \}\) be a fixed orthonormal basis of \(\mathscr {H}\), and let \(P_n\) denote orthogonal projection onto \(\bigvee \{ e_1, \ldots , e_n \}\).

There is an obvious extension of (3) to \({B(\mathscr {H})}^d\); we shall call this domain \(G_{\delta }^\#\).

$$\begin{aligned} G_{\delta }^\#= \{ x \in {B(\mathscr {H})}^d: \Vert \delta (x) \Vert < 1 \} . \end{aligned}$$
(4)

Both sets (1) are of the form (4) for an appropriate choice of \(\delta \). Note that every \(G_{\delta }^\#\) is closed with respect to countable direct sums.
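
For instance (one possible choice, spelling out what is left implicit), the first set in (1) is \(G_{\delta }^\#\) for the d-by-d diagonal matrix \(\delta (x) = \mathrm{diag}\, (x^1, \ldots , x^d)\), and the second is \(G_{\delta }^\#\) for the 1-by-d row \(\delta (x) = \begin{bmatrix} x^1&\quad \cdots&\quad x^d \end{bmatrix}\), since

$$\begin{aligned} \Vert \mathrm{diag}\, (x^1, \ldots , x^d) \Vert = \max _j \Vert x^j \Vert \qquad \text {and}\qquad \bigl \Vert \begin{bmatrix} x^1&\quad \cdots&\quad x^d \end{bmatrix} \bigr \Vert ^2 = \Vert x^1 (x^1)^* + \cdots + x^d (x^d)^* \Vert . \end{aligned}$$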

By identifying \(\mathscr {M}_n\) with \(P_n {B(\mathscr {H})}P_n\), we can embed \({\mathbb M}^{[d]}\) in \({B(\mathscr {H})}^d\). If a function \(F :G_{\delta }^\#\rightarrow {B(\mathscr {H})}\) satisfies \( F (x) = P_n F(x)P_n\) whenever \(x = P_n x P_n\), then F naturally induces a graded function \(F^\flat \) on \(G_{\delta }\).

Here is a slightly simplified version of Theorem 5.4 (the assumption that 0 goes to 0 is unnecessary, but without it the statement is more complicated).

Theorem 1.6

Assume that \(G_{\delta }^\#\) is connected and contains 0. Then every bounded nc-function on \(G_{\delta }\) that maps 0 to 0 has a unique extension to an SSOC IP function on \(G_{\delta }^\#\). The extension has a series expansion in free polynomials that converges uniformly on \(G_{t \delta }^\#\) for each \(t>1\).

2 Intertwining preserving functions

The normal definition of an nc-function is a graded function f defined on a set \(D \subseteq {\mathbb M}^{[d]}\) such that D is closed with respect to direct sums, and such that f preserves direct sums and similarities, i.e. \(f(x {\oplus } y) = f(x) {\oplus } f(y)\) and if \( x = s^{-1} y s\) then \(f(x) = s^{-1} f(y) s\), whenever \(x,y \in D \cap \mathscr {M}_n^d\) and s is an invertible matrix in \(\mathscr {M}_n\). The fact that on such sets D this definition agrees with our earlier Definition 1.1 is proved in [13, Proposition 2.1].

There is a subtle difference between the nc-property and IP, because of the rôle of 0. For an nc-function, \(f(x {\oplus } 0) = f(x){\oplus } 0\), but for an IP function, we have \(f(x {\oplus } 0) = f(x) {\oplus } f(0)\). If \(f(0) = 0\), this presents no difficulty; but 0 need not lie in the domain of f, and even if it does, it need not be mapped to 0.

Consider, for an illustration, the case \(d=1\) and the function \(f(x) = x+1\). For each \(n \in \mathbb {N}\), let \(M_n\) be the n-by-n matrix that is 1 in the (1, 1) entry and 0 elsewhere. As an nc-function, we have

$$\begin{aligned} f:\, \begin{bmatrix} 1&\quad 0&\quad \cdots&\quad 0\\ 0&\quad 0&\quad \cdots&\quad 0 \\ \vdots&\quad \vdots&\quad \ddots&\quad \vdots \\ 0&\quad 0&\quad \cdots&\quad 0 \end{bmatrix} \mapsto \begin{bmatrix} 2&\quad 0&\quad \cdots&\quad 0 \\ 0&\quad 1&\quad \cdots&\quad 0 \\ \vdots&\quad \vdots&\quad \ddots&\quad \vdots \\ 0&\quad 0&\quad \cdots&\quad 1 \end{bmatrix}. \end{aligned}$$

But now, if we wish to extend f to an IP function on \({B(\mathscr {H})}\), what is the image of the diagonal operator T with first entry 1 and the rest 0? We want to identify T with \(M_n {\oplus } 0\), and map it to \(f(M_n) {\oplus } 0\) — but then each n gives a different image.
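
Concretely, identifying T with \(M_1 {\oplus } 0\) and with \(M_2 {\oplus } 0\) would produce the candidate images

$$\begin{aligned} f(M_1) {\oplus } 0 = \mathrm{diag}\, (2, 0, 0, \ldots ) \qquad \text {and}\qquad f(M_2) {\oplus } 0 = \mathrm{diag}\, (2, 1, 0, 0, \ldots ), \end{aligned}$$

and so on; no single operator serves as the image of T under all of these identifications.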

In order to interface with the theory of nc-functions, we shall assume that all our domains contain 0. To avoid the technical difficulty we just described, we shall compose our functions with Möbius maps to ensure that 0 is mapped to 0.

Lemma 2.1

If F is an IP function on \(D \subseteq {B(\mathscr {H})}^d\), and \(P \in {B(\mathscr {H})}\) is a projection, then, for all \(c \in D\) satisfying \(c = cP\) (or \(c = Pc\)) we have

$$\begin{aligned} a = PaP \quad \Longrightarrow \quad F(a) = P F(a) P + P^\perp F(c) P^\perp . \end{aligned}$$
(5)

Proof

As \(Pa = a P\), we get \(P F(a) = F(a) P\). As \(P^\perp a = 0 = c P^\perp \), we get \(P^\perp F(a) = F(c) P^\perp \). Combining these, we get (5). \(\square \)
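
In more detail, the combining step is the following routine computation. Since \(PF(a) = F(a)P\), the corners \(P F(a) P^\perp \) and \(P^\perp F(a) P\) are zero, so \(F(a) = P F(a) P + P^\perp F(a) P^\perp \); and substituting \(P^\perp F(a) = F(c) P^\perp \) gives

$$\begin{aligned} P^\perp F(a) P^\perp = P^\perp \bigl ( P^\perp F(a) \bigr ) P^\perp = P^\perp F(c) P^\perp , \end{aligned}$$

which is (5).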

We let \(\phi _\alpha \) denote the Möbius map on \(\mathbb {D}\) given by

$$\begin{aligned} \phi _\alpha (\zeta ) = \frac{\zeta - \alpha }{1 - \overline{\alpha } \zeta } . \end{aligned}$$

Lemma 2.2

Let \(D \subseteq {B(\mathscr {H})}^d\) contain 0, and assume F is an IP function from D to \({B_1(\mathscr {H})}\). Then

  1. (i)

    There is a scalar \(\alpha \in \mathbb {D}\) such that \(F(0) = \alpha I_\mathscr {H}\).

  2. (ii)

    The map \(H = \phi _\alpha {\circ } F\) is an IP function on D that maps 0 to 0.

  3. (iii)

    For any \(a \in D\) and any projection P we have

    $$\begin{aligned} a = PaP \quad \Longrightarrow \quad H(a) = P H(a) P . \end{aligned}$$
  4. (iv)

    \(F = \phi _{-\alpha } {\circ } H\).

Proof

(i)  By Lemma 2.1 applied to \(a = c = 0\), we get that F(0) commutes with every projection P in \({B(\mathscr {H})}\). Therefore it must be a scalar multiple of the identity.

(ii)  For all z in \({B_1(\mathscr {H})}\), we have

$$\begin{aligned} \phi _\alpha (z) = - \alpha I_\mathscr {H}+ \big (1 - | \alpha |^2\big ) \sum _{n=1}^\infty \overline{\alpha }{}^{n-1} z^n , \end{aligned}$$
(6)

where the series converges uniformly and absolutely on every ball of radius less than one. By (i), we have \(H(0) = \phi _\alpha (\alpha I_\mathscr {H}) = 0\). If \(T x = yT\), then \(T F(x) = F(y)T\), and so \( T [ F(x)]^n = [F(y)]^n T\) for every n. Letting z be F(x) and F(y) in (6) and using the fact that the series converges uniformly, we conclude that \(T \phi _\alpha (F(x) ) = \phi _\alpha (F(y) ) T\), and hence H is IP.

(iii) follows from Lemma 2.1 with \( c = 0\).

(iv) follows from \(\phi _{-\alpha } {\circ } \phi _{\alpha } (z) = z\) for every \(z \in {B_1(\mathscr {H})}\). \(\square \)

Using the orthonormal basis \(\{e_1, e_2, \ldots \}\) of \(\mathscr {H}\) fixed above, we can identify \(\mathscr {M}_n^d\) with \(P_n {B(\mathscr {H})}^dP_n\). Let us define

$$\begin{aligned} \mathscr {M}_n^d= P_n {B(\mathscr {H})}^dP_n,\qquad \mathscr {M}^{[d]}= \bigcup _{n=1}^\infty \mathscr {M}_n^d. \end{aligned}$$

Applying Lemma 2.2, we get the following.

Proposition 2.3

Let \(D \subseteq {B(\mathscr {H})}^d\) contain 0, and assume F is an IP function from D to \({B_1(\mathscr {H})}\). Let \(H = \phi _\alpha {\circ } F\), where \(\alpha \) is the scalar such that \(F(0) = \alpha I_\mathscr {H}\). Then \( H |_{D \cap \mathscr {M}^{[d]}}\) is an nc-function that is bounded by 1 in norm, and maps 0 in \(\mathscr {M}_n^d\) to the matrix 0 in \({\mathscr {M}}_n^1 = P_n {B(\mathscr {H})}P_n\).

If we let \(H^\flat \) denote \( H |_{D \cap \mathscr {M}^{[d]}}\), we can ask

Question 2.4

To what extent does \(H^\flat \) determine H?

Question 2.5

Does every bounded nc-function from \({D \cap \mathscr {M}^{[d]}}\) to \({\mathscr {M}}^1\) extend to a bounded IP function on D?

If

$$\begin{aligned} \delta \big (x^1,x^2\big ) = I - \big (x^1 x^2 - x^2 x^1\big ), \end{aligned}$$

then \(G_{\delta }^\#\) is non-empty, but \(G_{\delta }\) is empty, and the questions do not make much sense. But we do give answers to both questions in Theorem 5.4, in the special case that D is of the form \( G_{\delta }^\#\) and in addition is assumed to be balanced.
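
That \(G_{\delta }\) is empty can be seen from the following trace computation, included for completeness: if \(x^1, x^2\) are n-by-n matrices, then \(\mathrm{tr}\, (x^1 x^2 - x^2 x^1) = 0\), so

$$\begin{aligned} \Vert \delta (x) \Vert \ge \frac{1}{n}\, \bigl | \mathrm{tr}\, \delta (x) \bigr | = \frac{1}{n}\, \bigl | \mathrm{tr}\, I_n - \mathrm{tr}\, (x^1 x^2 - x^2 x^1) \bigr | = 1 \end{aligned}$$

for every matrix point; in infinite dimensions there is no trace to provide such an obstruction, which is consistent with \(G_{\delta }^\#\) being non-empty.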

3 IP SSOC functions are analytic

Let us give a quick summary of what it means for a function to be holomorphic on a Banach space; we refer the reader to the book [6] by Dineen for a comprehensive treatment. Let D be an open subset of a Banach space X, and \(f:D \rightarrow Y\) a map into a Banach space Y. We say f has a Gâteaux derivative at x if

$$\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{f(x + \lambda h) - f(x)}{\lambda } \overset{\mathrm{def}}{=} Df(x)[h] \end{aligned}$$

exists for all \(h \in X\). If f has a Gâteaux derivative at every point of D it is Gâteaux holomorphic [6, Lemma 3.3], i.e. holomorphic on each one dimensional slice. If, in addition, f is locally bounded on D, then it is actually Fréchet holomorphic [6, Proposition 3.7], which means that for each x there is a neighborhood G of 0 such that the Taylor series

$$\begin{aligned} f(x+h) = f(x) + \sum _{k=1}^\infty \frac{1}{k!}\, D^k f(x) [h,\ldots , h], \qquad h \in G, \end{aligned}$$
(7)

converges uniformly for all h in G. The kth derivative \(D^k f(x)\) is a continuous k-linear map from \(X^k\) to Y, which in (7) is evaluated on the k-tuple \((h,h, \ldots , h)\).

The following lemma is the IP version of [8, Proposition 2.5] and [13, Proposition 2.2].

Lemma 3.1

Let D be an open set in \({B(\mathscr {H})}^d\) that is closed with respect to countable direct sums, and let \(F :D \rightarrow {B(\mathscr {H})}\) be intertwining preserving. Then F is bounded on bounded subsets of D, continuous and Gâteaux differentiable.

Proof

(Locally bounded)  Suppose there were a sequence \((x_n)\) in D such that \(\{ \Vert x_n \Vert \} \) is bounded, but \(\{ \Vert F(x_n) \Vert \}\) is unbounded. Since D is closed with respect to countable direct sums, there exists some unitary \(u :\mathscr {H}\rightarrow \mathscr {H}^\infty \) such that \(u^* \bigl ( \bigoplus x_n \bigr ) u \in D\). Since F is IP, by Definition 1.2 (ii) we have \(F \bigl ( u^* \bigl ( \bigoplus x_n \bigr ) u \bigr ) = u^* \bigl [ \bigoplus F(x_n) \bigr ] u\), so \( \bigoplus F(x_n)\) is a bounded operator, which contradicts the unboundedness of \(\{ \Vert F(x_n) \Vert \}\).

(Continuity)  Fix \(a \in D \) and let \(\varepsilon > 0\). By hypothesis, there exists a unitary \(u :\mathscr {H}\rightarrow \mathscr {H}^2\) such that

$$\begin{aligned} \alpha = \ u^* \begin{bmatrix} a&\quad 0 \\ 0&\quad a \end{bmatrix} u \in D. \end{aligned}$$
(8)

Choose \(\delta _1 > 0\) such that \(B( a , \delta _1) \subseteq D\), \(B( \alpha , \delta _1) \subseteq D\), and such that on \(B( \alpha , \delta _1)\) the function F is bounded by M. Choose \(\delta _2 > 0\) such that \(\delta _2 < \min \bigl ( \delta _1/2,\, \varepsilon \delta _1/(2M) \bigr )\). Note that for any \(a,b \in {B(\mathscr {H})}^d\) and any \(\lambda \in \mathbb {C}\), we have

$$\begin{aligned} u^* \begin{bmatrix} I&\quad - \lambda \\ 0&\quad I \end{bmatrix} \begin{bmatrix} b&\quad 0 \\ 0&\quad a \end{bmatrix} \begin{bmatrix} I&\quad \lambda \\ 0&\quad I \end{bmatrix} u = u^* \begin{bmatrix} b&\quad \lambda (b-a) \\ 0&\quad a \end{bmatrix} u. \end{aligned}$$
(9)

So by part (ii) of the definition of IP (Definition 1.2) we get that if \(\Vert b - a \Vert < \delta _2\), and letting \(\lambda = M/\varepsilon \), then

$$\begin{aligned} F\left( u^* \begin{bmatrix} I&\quad - M/\varepsilon \\ 0&\quad I \end{bmatrix} \begin{bmatrix} b&\quad 0 \\ 0&\quad a \end{bmatrix} \begin{bmatrix} I&\quad M/\varepsilon \\ 0&\quad I \end{bmatrix} u \right) = u^* \begin{bmatrix}F( b)&\;\;M[F (b)-F(a)]/\varepsilon \\ 0&\;\;F(a) \end{bmatrix} u \end{aligned}$$

is bounded by M, because its argument lies in \(B( \alpha , \delta _1)\): indeed, its distance from \(\alpha \) is at most \(\Vert b - a \Vert \, (1 + M/\varepsilon ) \le \delta _2 (1 + M/\varepsilon ) < \delta _1\). In particular, since the norm of the (1, 2)-entry of the last matrix is bounded by the norm of the whole matrix, we see that \(\Vert M(F(b)-F(a))/\varepsilon \Vert \le M\), so \(\Vert F(b)-F(a) \Vert \le \varepsilon \).

(Differentiability)  Let \(a \in D\) and \(h \in {B(\mathscr {H})}^d\). Let u be as in (8). Choose \(\varepsilon > 0\) such that, for all complex numbers t with \(|t| < \varepsilon \),

$$\begin{aligned} u^* \begin{bmatrix} a + th&\;\;\varepsilon h \\ 0&\;\;a \end{bmatrix} u \in D, \end{aligned}$$

and \(a + th \in D\). Let \(b = a +th\) and \(\lambda =\varepsilon /t\) in (9), and as before we conclude that

$$\begin{aligned} F\left( u^* \begin{bmatrix} a + th&\;\; \varepsilon h \\ 0&\;\;a \end{bmatrix} u \right) = u^* \begin{bmatrix} F(a + th)&\;\;\varepsilon (F(a+th) - F(a))/t \\ 0&\;\;F(a) \end{bmatrix} u . \end{aligned}$$
(10)

As F is continuous, when we take the limit as \(t \rightarrow 0\) in (10), we get

$$\begin{aligned} F\left( u^* \begin{bmatrix} a&\quad \varepsilon h \\ 0&\quad a \end{bmatrix} u \right) = u^* \begin{bmatrix} F(a )&\quad \varepsilon DF (a)[h] \\ 0&\quad F(a) \end{bmatrix} u . \end{aligned}$$

Therefore \(DF(a) [h]\) exists, so F is Gâteaux differentiable, as required.\(\square \)

When we replace X by a Banach algebra (in our present case, this is \({B(\mathscr {H})}^d\) with coordinate-wise multiplication), we would like something more than Fréchet holomorphy: we would like the kth term in (7) to be an actual free polynomial, homogeneous of degree k, in the entries of h.
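
As a simple illustration, take \(d = 2\) and \(F(x) = x^1 x^2 x^1\). Expanding \(F(a+h)\) and grouping by degree in h gives

$$\begin{aligned} F(a+h) = a^1 a^2 a^1 + \bigl ( h^1 a^2 a^1 + a^1 h^2 a^1 + a^1 a^2 h^1 \bigr ) + \bigl ( h^1 h^2 a^1 + h^1 a^2 h^1 + a^1 h^2 h^1 \bigr ) + h^1 h^2 h^1 , \end{aligned}$$

and the degree k group is \(\frac{1}{k!}\, D^k F(a)[h, \ldots , h]\), a free polynomial in the entries of h that is homogeneous of degree k. Theorem 3.4 below establishes the analogous statement for SSOC IP functions at the point 0.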

The following result was proved by Kaliuzhnyi-Verbovetskyi and Vinnikov [13, Theorem 6.1] and by Klep and Špenko [14, Proposition 3.1].

Theorem 3.2

Let

$$\begin{aligned} g:{\mathbb M}^{[d]}\rightarrow \mathscr {M}^1, \qquad x\mapsto g(x) \end{aligned}$$

be an nc-function such that each matrix entry of g(x) is a polynomial of degree less than or equal to N in the entries of the matrices \(x^r\), \(1 \le r \le d\). Then g is a free polynomial of degree less than or equal to N.

We extend this result to multilinear SSOC IP maps. Each \(h_j\) will be a d-tuple of operators, \((h_j^1, \ldots , h_j^d)\).

Proposition 3.3

Let

$$\begin{aligned} L:{B(\mathscr {H})}^{dN} \rightarrow {B(\mathscr {H})}, \qquad (h_1, \ldots , h_N) \mapsto L(h_1,\ldots , h_N) \end{aligned}$$

be a continuous N-linear map from \(({B(\mathscr {H})}^{d})^N\) to \({B(\mathscr {H})}\) that is IP and SSOC. Then L is a homogeneous polynomial of degree N in the variables \(h_1^1, \ldots , h_N^d\).

Proof

By Proposition 2.3, if we restrict L to \({\mathscr {M}}^{dN}\), we get an nc-function. By Theorem 3.2, there is a free polynomial p of degree N that agrees with L on \({\mathscr {M}}^{dN}\). By homogeneity, p must be homogeneous of degree N. Define

$$\begin{aligned} {\Delta }(h) = L(h) - p(h) . \end{aligned}$$

Then \({\Delta }\) vanishes on \(({\mathscr {M}}^{d})^{N}\), and is SSOC. Since \(({\mathscr {M}}^{d})^{N}\) is strong operator topology dense in \(({B(\mathscr {H})}^{d})^N\), it follows that \({\Delta }\) is identically 0. \(\square \)

One of the achievements of Kaliuzhnyi-Verbovetskyi and Vinnikov in [13] is the Taylor–Taylor formula [13, Theorem 4.1]. This comes with a remainder term, which can be estimated. They show [13, Theorem 7.4] that with the assumption of local boundedness, this renders an nc-function analytic. The following theorem is an IP version of the latter result.

Theorem 3.4

Let D be an open neighborhood of 0 in \({B(\mathscr {H})}^d\), and let \(F :D \rightarrow {B(\mathscr {H})}\) be a function that is intertwining preserving and sequentially strong operator continuous. Then there is an open set \(U \subseteq D\) containing 0 and homogeneous free polynomials \(P_k\) of degree k such that

$$\begin{aligned} F(x) = F(0) + \sum _{k=1}^\infty P_k(x), \qquad x \in U, \end{aligned}$$

where the convergence is uniform for \(x \in U\).

Proof

Any open ball centered at 0 is closed with respect to countable direct sums, so we can assume without loss of generality that D is closed with respect to countable direct sums and bounded. By Lemma 3.1, F is bounded and Gâteaux differentiable on D, and so by [6, Proposition 3.7], F is automatically Fréchet holomorphic. Therefore, there is some open ball U centered at 0 such that

$$\begin{aligned} F(h) = F(0) + \sum _{k=1}^\infty \frac{1}{k!}\, D^k F(0)[h,\ldots , h], \qquad h \in U . \end{aligned}$$

We must show that each \(D^k F(0) [h,\ldots , h]\) is actually a free polynomial in h.

Claim 3.5

For each \(k \in \mathbb {N}\), the function

$$\begin{aligned} G^k:\big (h^0, \ldots , h^k\big ) \mapsto D^k F(h^0) \big [h^1,\ldots , h^k\big ] \end{aligned}$$
(11)

is an IP function on \( U {\times } ({B(\mathscr {H})}^d)^k \subseteq ({B(\mathscr {H})}^d)^{k+1}\).

Proof

Indeed, when \(k=1\), we have

$$\begin{aligned} D F\big (h^0\big ) \big [h^1\big ] = \lim _{t \rightarrow 0} \frac{1}{t} \,\Bigl [ F\big (h^0{ +} th^1\big ) - F\big (h^0\big )\Bigr ]. \end{aligned}$$
(12)

As F is IP, so is the right-hand side of (12). For \(k > 1\),

$$\begin{aligned} D^k F(h^0) [h^1\!,\dots , h^k] = \lim _{t \rightarrow 0}&\frac{1}{t} \,\bigl [D^{k-1} F(h^0{+}th^k) [h^1\!, \dots , h^{k-1}]\\&\qquad \qquad - D^{k-1}F(h^0) [h^1\!,\dots , h^{k-1}] \bigr ]. \end{aligned}$$

By induction, these are all IP. \(\blacksquare \)

Claim 3.6

For each \(k \in \mathbb {N}\), the function \(G^k\) from (11) is SSOC on \( U {\times } ({B(\mathscr {H})}^d)^k\).

Proof

Again we do this by induction on k. Let \(G^0 = F\), which is SSOC on \(U \subseteq D\) by hypothesis. Since \(G^{k-1}\) is IP on the set \(U^k\), it is locally bounded, and by Lemma 3.1 it is Gâteaux differentiable. Suppose

$$\begin{aligned} \mathrm{SOT} \lim _{n \rightarrow \infty } h^j_n = h^j, \qquad 0 \le j \le k , \end{aligned}$$

where each \(h^j_n\) and \( h^j \) is in U. Let h denote the \((k{+}1)\)-tuple \( (h^0, \ldots , h^{k})\) in \(U^{k+1}\), and let \({\widetilde{h}}\) denote the k-tuple \( (h^0, \ldots , h^{k-1})\); similarly, let \(h_n\) denote \( (h_n^0, \ldots , h_n^{k})\) and \({\widetilde{h}}_n\) denote \( (h_n^0, \ldots , h_n^{k-1})\). There exists some unitary u so that \( y = u^* \bigl (\widetilde{h} {\oplus } {\widetilde{h}}_1 {\oplus } {\widetilde{h}}_2 {\oplus } \cdots \bigr ) u\) is in \(U^{k}\). Since \(G^{k-1}\) is differentiable at y, and is IP, we have that the diagonal operator with entries

$$\begin{aligned}&\displaystyle \frac{1}{t}\, \bigl [ G^{k-1}(h^0 {+} t h^k\!, h^1\, \dots , h^{k-1}) - G^{k-1}(h^0\! , h^1\!, \dots , h^{k-1})\bigr ], \nonumber \\&\displaystyle \frac{1}{t}\,\bigl [G^{k-1}(h^0_1 {+} t h^k_1, h^1_1, \dots , h^{k-1}_1) - G^{k-1}(h^0_1 , h^1_1, \dots , h^{k-1}_1) \bigr ], \\&\displaystyle \cdots \nonumber \end{aligned}$$
(13)

has a limit as \(t \rightarrow 0\).

Let \(\varepsilon > 0\), and let \(v \in \mathscr {H}\) have \(\Vert v \Vert \le 1\). Choose t sufficiently close to 0 that each of the difference quotients in (13) is within \(\varepsilon /3\) of its limit (which is \(G^k\) evaluated at the appropriate h or \(h_n\)). Let n be large enough so that

$$\begin{aligned}&\bigl \Vert \bigl [ G^{k-1}(h^0 {+} t h^k\!, h^1\!, \dots , h^{k-1}) - G^{k-1}(h^0_n {+} t h^k_n, h^1_n, \dots , h^{k-1}_n) \bigr ] v \bigr \Vert \\&\qquad \quad + \bigl \Vert \bigl [ G^{k-1}(h^0\! , h^1\!, \dots , h^{k-1}) - G^{k-1}(h^0_n , h^1_n, \dots , h^{k-1}_n) \bigr ] v \bigr \Vert \le \frac{\varepsilon t}{3} . \end{aligned}$$

Then

$$\begin{aligned} \Bigl \Vert \Bigl [ G^k \big (h^0, \ldots , h^k\big ) - G^k \big (h^0_n, \ldots , h^k_n \big ) \Bigr ] v \Bigr \Vert \le \varepsilon . \end{aligned}$$

So each \(G^k\) is SSOC on \(U^{k+1}\). As \(G^k\) is linear in the last k variables, it is SSOC on \( U {\times } ({B(\mathscr {H})}^d)^k\) as claimed. \(\blacksquare \)

Therefore for each k, the map

$$\begin{aligned} \big (h^1, \ldots , h^k\big ) \mapsto D^k F(0) \big [h^1,\ldots , h^k\big ] \end{aligned}$$

is a linear IP function that is SSOC in a neighborhood of 0, so by Proposition 3.3 is a free polynomial. \(\square \)

4 Power series

Proof of Theorem 1.5 (i) \(\Rightarrow \) (ii).  As F is Gâteaux differentiable and bounded on bounded subsets of D (Lemma 3.1), it is Fréchet holomorphic [6, Proposition 3.7]. By Theorem 3.4, the power series at 0 is actually of the form (2). We must show the series converges on all of D.

Fix \(x \in D\). Since D is open and balanced, there exists \(r > 1\) such that \(\lambda x \in D\) for every \(\lambda \in \mathbb {D}(0, r)\). As each \(P_k\) is homogeneous, we have that for \(\lambda \) in a neighborhood of 0,

$$\begin{aligned} F(\lambda x) = \sum _{k=0}^\infty P_k( \lambda x) = \sum _{k=0}^\infty \lambda ^k P_k(x) . \end{aligned}$$
(14)

Therefore, the function \(\psi :\lambda \mapsto F(\lambda x)\) is analytic on \(\mathbb {D}(0, r)\), with values in \({B(\mathscr {H})}\), and its power series expansion at 0 is given by (14). Let \(M = \sup \{ \Vert F(\lambda x) \Vert : | \lambda | < r \}\), which is finite because F is bounded on bounded subsets of D (Lemma 3.1).

By the Cauchy integral formula, since \(\Vert \psi (\lambda ) \Vert \le M\) for \(|\lambda | < r\), we get that

$$\begin{aligned} \biggl \Vert \frac{d^k \psi }{d \lambda ^k} (0) \biggr \Vert \le M\, \frac{k!}{r^k} . \end{aligned}$$
(15)
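
Explicitly, this is the standard vector-valued Cauchy estimate: for \(0< \rho < r\),

$$\begin{aligned} \frac{d^k \psi }{d \lambda ^k}\, (0) = \frac{k!}{2 \pi i} \int _{|\lambda | = \rho } \frac{\psi (\lambda )}{\lambda ^{k+1}}\, d\lambda , \qquad \text {so}\qquad \biggl \Vert \frac{d^k \psi }{d \lambda ^k}\, (0) \biggr \Vert \le M\, \frac{k!}{\rho ^k} , \end{aligned}$$

and letting \(\rho \rightarrow r\) gives (15).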

Comparing (14) and (15), we conclude that

$$\begin{aligned} \Vert P_k (x) \Vert \le \frac{M}{r^k}, \end{aligned}$$

and so the power series in (14) converges uniformly and absolutely for \(\lambda \) in the closed unit disk. Since \(\psi \) agrees with its Taylor series on \(\mathbb {D}(0, r)\), setting \(\lambda = 1\) shows that \(\sum P_k(x)\) converges absolutely to F(x), as required.

(ii) \(\Rightarrow \) (iii).  Obvious.

(iii) \(\Rightarrow \) (i).  (IP (i)).  Let \(x,y \in D\), and assume there exists \(T \in {B(\mathscr {H})}\) such that \(Tx = yT\). Let \(\varepsilon > 0\), and choose a free polynomial p such that \(\Vert p(x) - F(x) \Vert < \varepsilon \) and \(\Vert p(y) - F(y) \Vert < \varepsilon \). Then

$$\begin{aligned} \Vert T F(x) - F(y) T\Vert =\Vert T F(x) - T p(x) + p(y)T - F(y) T\Vert \le 2 \Vert T \Vert \varepsilon . \end{aligned}$$

Here we have used that \(T p(x) = p(y) T\), which holds because p is a free polynomial and \(Tx = yT\). As \(\varepsilon \) is arbitrary, we conclude that \(TF(x) = F(y)T\).

(IP (ii)).  Suppose \((x_n)\) is a bounded sequence in D, and assume it is infinite. (The argument for finite sequences is similar.) Let z be the diagonal d-tuple with entries \(x_1, x_2 , \ldots \), and let \(s :\mathscr {H}\rightarrow \mathscr {H}^\infty \) be such that \( y = s^{-1} z s\) is in D. For each fixed n, choose a sequence \(p_k\) of free polynomials that approximate F on \(\{ y, x_n \}\). Then

$$\begin{aligned} F \left( s^{-1} \begin{bmatrix} x_1&\quad 0&\quad \cdots \\ 0&\quad x_2&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s \right)&= \lim _{k \rightarrow \infty } p_k \left( s^{-1} \begin{bmatrix} x_1&\quad 0&\quad \cdots \\ 0&\quad x_2&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s \right) \\&= s^{-1} \lim _{k \rightarrow \infty } \begin{bmatrix}p_k ( x_1)&\quad 0&\quad \cdots \\ 0&\quad p_k ( x_2)&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s . \end{aligned}$$

The nth diagonal entry of the right-hand side is \(\lim _{k \rightarrow \infty } p_k(x_n) = F(x_n)\); so, as n is arbitrary, we conclude that

$$\begin{aligned} F \left( s^{-1} \begin{bmatrix} x_1&\quad 0&\quad \cdots \\ 0&\quad x_2&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s \right) = s^{-1} \begin{bmatrix} F ( x_1)&\quad 0&\quad \cdots \\ 0&\quad F( x_2)&\quad \cdots \\ \vdots&\quad \vdots&\quad \ddots \end{bmatrix} s . \end{aligned}$$

(SSOC).  Suppose \(x_n \) in D converges to x in D in the SOT; by the uniform boundedness principle, \(\{x, x_1, x_2, \ldots \}\) is a bounded subset of D. As before, by taking direct sums, we can approximate F by free polynomials uniformly on countable bounded subsets of D. So for any vector v and any \(\varepsilon >0\), we can choose a free polynomial p so that \(\Vert F(z) - p(z) \Vert < \varepsilon /3\) for every z in \(\{x, x_1, x_2, \ldots \}\), and then, since p is SSOC, choose N so that for \(n \ge N\) we have \(\Vert [p(x) - p(x_n) ] v \Vert < \varepsilon /3\). Then \(\Vert [F(x) - F(x_n) ] v \Vert < \varepsilon \) for all \(n \ge N\). \(\square \)

In particular, we get the following consequence, which says that SSOC IP functions take values in the closed unital algebra generated by the coordinates of their argument.

In [3, Theorem 7.7] it is shown that for general nc-functions f, it need not be true that f(x) is in the algebra generated by x.

Corollary 4.1

Assume that D is balanced and closed with respect to countable direct sums, and that \(F:D \rightarrow {B(\mathscr {H})}\) is SSOC and IP. Then, for each \(x \in D\), the operator F(x) is in the closed unital algebra generated by \(x^1, \ldots , x^d\).

5 Free IP functions

Recall the definition of the sets \(G_{\delta }\) in (3); the topology they generate is called the free topology on \({\mathbb M}^{[d]}\).

Definition 5.1

A free holomorphic function on a free open set \(D \subseteq {\mathbb M}^{[d]}\) is an nc-function that, in the free topology, is locally bounded.

Free holomorphic functions are a class of nc-functions studied by the authors in [1, 2]. In particular, a representation theorem was proved there for nc-functions that are bounded by 1 on \(G_{\delta }\).

Theorem 5.2

([1, Theorem 8.1]) Let \(\delta \) be an I-by-J matrix of free polynomials, and let f be an nc-function on \(G_{\delta }\) that is bounded by 1. There exists an auxiliary Hilbert space \(\mathscr {L}\) and an isometry

$$\begin{aligned} \begin{bmatrix}\alpha&\quad B\\C&\quad D\end{bmatrix} :\, \mathbb {C}{\oplus } \mathscr {L}^{I} \rightarrow \mathbb {C}{\oplus } \mathscr {L}^{J} \end{aligned}$$

so that for \(x \in G_{\delta }\cap B(\mathscr {K})^d\),

$$\begin{aligned} f(x) = \alpha I_\mathscr {K}+ (I_\mathscr {K}{\otimes } B) &(\delta (x) {\otimes }I_\mathscr {L})\\ &\cdot \bigl [ I_\mathscr {K}{\otimes } I_{\mathscr {L}^J} - (I_\mathscr {K}{\otimes } D) (\delta (x) {\otimes }I_\mathscr {L}) \bigr ]^{-1} (I_\mathscr {K}{\otimes } C). \end{aligned}$$
(16)

Obviously, one can define a function on \(G_{\delta }^\#\) using the right-hand side of (16), replacing the finite dimensional space \(\mathscr {K}\) by the infinite dimensional space \(\mathscr {H}\). The following theorem gives conditions on a function that are equivalent to its arising in this way.

Theorem 5.3

Let \(\delta \) be an I-by-J matrix of free polynomials, and assume that \(G_{\delta }^\#\) is connected and contains 0. Let \(F :G_{\delta }^\#\rightarrow {B_1(\mathscr {H})}\) be sequentially strong operator continuous. Then the following are equivalent:

  1. (i)

    The function F is intertwining preserving.

  2. (ii)

    For each \(t > 1\), the function F is uniformly approximable by free polynomials on \(G_{t \delta }^\#\).

  3. (iii)

    There exists \(\alpha \in \mathbb {D}\) such that if \({\Phi } = \phi _\alpha {\circ } F\), then \({\Phi }^\flat \) is a free holomorphic function on \(G_{\delta }\) that is bounded by 1 in norm, and that maps 0 to 0.

  4. (iv)

    There exists an auxiliary Hilbert space \(\mathscr {L}\) and an isometry

    $$\begin{aligned} \begin{bmatrix}\alpha&\quad B\\C&\quad D\end{bmatrix} :\,\mathbb {C}{\oplus } \mathscr {L}^{I} \rightarrow \mathbb {C}{\oplus } \mathscr {L}^{J} \end{aligned}$$

    so that for \(x \in G_{\delta }^\#\),

    $$\begin{aligned} F(x) = \alpha I_\mathscr {H}+ &(I_\mathscr {H}{\otimes } B) (\delta (x) {\otimes }I_\mathscr {L})\\ &\cdot \bigl [ I_\mathscr {H}{\otimes } I_{\mathscr {L}^J} - (I_\mathscr {H}{\otimes } D) (\delta (x) {\otimes }I_\mathscr {L}) \bigr ]^{-1} (I_\mathscr {H}{\otimes } C). \end{aligned}$$
    (17)

Proof

(i) \(\Rightarrow \) (iii).   This follows from Proposition 2.3.

(iii) \(\Rightarrow \) (iv).  By Theorem 5.2, we get such a representation for all \(x \in G_{\delta }\). The series on the right-hand side of (17) that one gets by expanding the Neumann series of

$$\begin{aligned} \bigl [ I_\mathscr {H}{\otimes } I_{\mathscr {L}^J} - (I_\mathscr {H}{\otimes } D) (\delta (x) {\otimes }I_\mathscr {L}) \bigr ]^{-1} \end{aligned}$$
(18)

converges absolutely on \(G_{\delta }^\#\); let us denote this limit by H(x). By Theorem 1.5, since H is a limit of free polynomials, it is IP and SSOC. Moreover, as \( \begin{bmatrix} \alpha&B\\C&D \end{bmatrix}\) is an isometry, we get by direct calculation that

$$\begin{aligned} I_\mathscr {H}- H^*(x) H(x) = (I_\mathscr {H}{\otimes } C^*) &\bigl [ I_\mathscr {H}{\otimes } I_{\mathscr {L}^J} - (\delta (x)^* {\otimes }I_\mathscr {L}) (I_\mathscr {H}{\otimes } D^*) \bigr ]^{-1} \\ &\cdot \bigl [ I_\mathscr {H}{\otimes } I_{\mathscr {L}^J} - \delta (x)^* \delta (x) {\otimes }I_\mathscr {L}\bigr ]\\ &\cdot \bigl [ I_\mathscr {H}{\otimes } I_{\mathscr {L}^J} - (I_\mathscr {H}{\otimes } D) (\delta (x) {\otimes }I_\mathscr {L}) \bigr ]^{-1} (I_\mathscr {H}{\otimes } C) \ge 0. \end{aligned}$$

Indeed, to see the last equality without being deluged by tensors, let us write

$$\begin{aligned} H(x) = \dot{\alpha }+ \dot{B} \dot{\delta } [ I - \dot{D} \dot{\delta }]^{-1} \dot{C}, \end{aligned}$$

where the dots denote appropriate tensors in (17). Then

$$\begin{aligned} I - H^* H&= I - \dot{\alpha }^* \dot{\alpha }- \dot{\alpha }^* \dot{B} \dot{\delta } [ I - \dot{D} \dot{\delta }]^{-1} \dot{C} - \dot{C}^* [ I - \dot{\delta }^* \dot{D}^* ]^{-1} \dot{\delta }^* \dot{B}^* \dot{\alpha }\\&\qquad \qquad \qquad - \dot{C}^* [ I - \dot{\delta }^* \dot{D}^* ]^{-1}\dot{\delta }^* \dot{B}^* \dot{B} \dot{\delta }[ I - \dot{D} \dot{\delta }]^{-1} \dot{C}\\&= \dot{C}^* \dot{C} + \dot{C}^* \dot{D} \dot{\delta } [ I - \dot{D} \dot{\delta }]^{-1} \dot{C} + \dot{C}^* [ I - \dot{\delta }^* \dot{D}^* ]^{-1} \dot{\delta }^* \dot{D}^* \dot{C} \\&\qquad \qquad \qquad - \dot{C}^* [ I - \dot{\delta }^* \dot{D}^*]^{-1} \dot{\delta }^* [ I - \dot{D}^* \dot{D} ] \dot{\delta } [ I - \dot{D} \dot{\delta }]^{-1} \dot{C} \\&= \dot{C}^* [ I - \dot{\delta }^* \dot{D}^*]^{-1} \Bigl \{ [ I - \dot{\delta }^* \dot{D}^*] [ I - \dot{D} \dot{\delta }] \\&\qquad \qquad \qquad \qquad \qquad \qquad + [ I - \dot{\delta }^* \dot{D}^*] \dot{D} \dot{\delta }+ \dot{\delta }^* \dot{D}^* [ I - \dot{D} \dot{\delta }] \\&\qquad \qquad \qquad \qquad \qquad \qquad - \dot{\delta }^* [ I - \dot{D}^* \dot{D} ] \dot{\delta } \Bigr \} [ I - \dot{D} \dot{\delta }]^{-1} \dot{C} \\&= \dot{C}^* [ I - \dot{\delta }^* \dot{D}^*]^{-1} [ I - \dot{\delta }^* \dot{\delta }] [ I - \dot{D} \dot{\delta }]^{-1} \dot{C} . \end{aligned}$$

Therefore, \(\Vert H(x) \Vert \le 1 \) for all \(x \in G_{\delta }^\#\). Let \({\Delta }(x) = H(x) - F(x)\). Then \({\Delta }\) is a bounded IP SSOC Fréchet holomorphic function on \(G_{\delta }^\#\) that vanishes on \(G_{\delta }^\#\cap \mathscr {M}^{[d]}= G_{\delta }\). There is a balanced neighborhood U of 0 in \(G_{\delta }^\#\). By Theorem 1.5, \({\Delta }\) has a power series expansion \({\Delta }(x) = \sum P_k(x)\), and each \(P_k\) vanishes on \(U \cap {\mathbb M}^{[d]}\): for x there and \(|\lambda | \le 1\), we have \(0 = {\Delta }(\lambda x) = \sum \lambda ^k P_k(x)\), so every coefficient \(P_k(x)\) vanishes. This means each \(P_k\) vanishes on a neighborhood of zero in every \(\mathscr {M}_n^d\), and hence must be zero. Therefore, \({\Delta } \) is identically zero on U. By analytic continuation, \({\Delta }\) is identically zero on all of \(G_{\delta }^\#\), and therefore (17) holds.

(iv) \(\Rightarrow \) (ii).  This follows because the Neumann series obtained by expanding (18) has the kth term bounded by \(\Vert \delta (x) \Vert ^k\), which is at most \(t^{-k}\) for \(x \in G_{t \delta }^\#\). Therefore, the resulting series for F converges uniformly and absolutely on \(G_{t \delta }^\#\) for every \(t > 1\), and its partial sums are free polynomials.
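
Concretely, substituting the Neumann series of (18) into (17) gives

$$\begin{aligned} F(x) = \alpha I_\mathscr {H}+ \sum _{k=0}^\infty (I_\mathscr {H}{\otimes } B) (\delta (x) {\otimes }I_\mathscr {L}) \bigl [ (I_\mathscr {H}{\otimes } D) (\delta (x) {\otimes }I_\mathscr {L}) \bigr ]^{k} (I_\mathscr {H}{\otimes } C) , \end{aligned}$$

and since B, C and D are blocks of an isometry, and hence contractions, the kth summand has norm at most \(\Vert \delta (x) \Vert ^{k+1} \le t^{-(k+1)}\) for \(x \in G_{t \delta }^\#\).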

(ii) \(\Rightarrow \) (i).  Repeat the argument of (iii) \(\Rightarrow \) (i) of Theorem 1.5. \(\square \)

In the notation of the theorem, let \(F^\flat = \phi _{-\alpha } {\circ } {\Phi }^\flat \). Then the proof of (iii) \(\Rightarrow \) (iv) shows that F and \(F^\flat \) determine each other uniquely. So we get

Theorem 5.4

Let \(\delta \) be an I-by-J matrix of free polynomials, and assume that \(G_{\delta }^\#\) is connected and contains 0. Then every bounded free holomorphic function on \(G_{\delta }\) has a unique extension to an IP SSOC function on \(G_{\delta }^\#\).