Advanced Analysis, Notes 16: Hilbert function spaces (basics)
by Orr Shalit
In the final week of the semester we will study Hilbert function spaces (also known as reproducing kernel Hilbert spaces) with the goal of presenting an operator theoretic proof of the classical Pick interpolation theorem. Since time is limited I will take a somewhat unorthodox route, and ignore much of the beautiful function theory involved. BGU students who wish to learn more about this should consider taking Daniel Alpay’s course next semester. Let me also note the helpful lecture notes available from Vern Paulsen’s webpage, as well as this monograph by Jim Agler and John McCarthy (in this post and the next one I will refer to these as [P] and [AM]).
(Not directly related to this post, but might be of some interest to students: there is an amusing discussion connected to earlier material in the course (convergence of Fourier series) here).
1. Hilbert function spaces
A Hilbert function space on a set $X$ is a Hilbert space $H$ which is a subspace of the vector space $\mathbb{C}^X$ of all functions $f : X \rightarrow \mathbb{C}$, with one additional and crucial property:

For every $x \in X$, the functional of point evaluation at $x$, given by $E_x(f) = f(x)$, is a bounded functional on $H$.

This crucial property connects the function theoretic properties of $H$ as a space of functions with the Hilbert space structure of $H$, and therefore also with the operator theory on $H$. Hilbert function spaces are also known as reproducing kernel Hilbert spaces (RKHS), for reasons that will become clear very soon.
When we say that $H$ is a subspace of $\mathbb{C}^X$, we mean two things: 1) $H$ is a subset of $\mathbb{C}^X$; and 2) $H$ inherits its linear space structure from $\mathbb{C}^X$. In particular, the zero function is the zero element of $H$, and two elements of $H$ which are equal as functions are equal as elements of $H$.
Example: Perhaps the most interesting examples of Hilbert function spaces are those that occur when $X$ is an open subset of $\mathbb{C}^d$, and now we’ll meet one of the most beloved examples. Let D denote the open unit disc in the complex plane $\mathbb{C}$. Recall that every function $f$ analytic in D has a power series representation $f(z) = \sum_{n=0}^\infty a_n z^n$. We define the Hardy space $H^2$ to be the space of analytic functions on the disc with square summable Taylor coefficients, that is

$H^2 = \big\{ f(z) = \sum_{n=0}^\infty a_n z^n : \sum_{n=0}^\infty |a_n|^2 < \infty \big\}$.

This space is endowed with the inner product

(*)   $\langle f, g \rangle = \sum_{n=0}^\infty a_n \overline{b_n}$,   where $f(z) = \sum a_n z^n$ and $g(z) = \sum b_n z^n$.
Before worrying about function theoretic issues, note that the space of formal power series with square summable coefficients is obviously isomorphic (as an inner product space) to $\ell^2$ when endowed with the inner product (*). But if a formal power series has square summable coefficients then its coefficients are bounded, so it is convergent in the open unit disc. Thus $H^2$ is a space of analytic functions which as an inner product space is isomorphic to $\ell^2$, and is therefore complete.
To show that $H^2$ is a Hilbert function space on D, we need to show that for every $w \in D$, the functional $f \mapsto f(w)$ is bounded. Well, if this functional is bounded, then by the Riesz representation theorem there exists some $k_w \in H^2$ such that $f(w) = \langle f, k_w \rangle$ for all $f \in H^2$. Take another look at (*), recall that $f(w) = \sum a_n w^n$, and putting these two things together it is not hard to come up with the guess $k_w(z) = \sum_{n=0}^\infty \overline{w}^n z^n$. Since $|w| < 1$, we have that $\sum |w|^{2n} < \infty$, so $k_w \in H^2$. Plugging $k_w$ into (*) we see that $\langle f, k_w \rangle = \sum a_n w^n = f(w)$, so that $|f(w)| \leq \|k_w\| \|f\|$, and this functional is indeed bounded.
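This computation is easy to test numerically. Below is a small sketch (the truncation length, the coefficients and the point $w$ are my own choices, not from the notes) verifying that the inner product (*) of a polynomial $f$ against the guessed kernel $k_w$ returns $f(w)$:

```python
import numpy as np

# A numerical sanity check (setup mine, not from the notes): for a polynomial
# f(z) = sum_n a_n z^n, the inner product (*) of f against the guessed kernel
# k_w(z) = sum_n conj(w)^n z^n should return the value f(w).
N = 200                                       # truncation length for k_w
rng = np.random.default_rng(0)
a = rng.standard_normal(5) + 1j * rng.standard_normal(5)   # Taylor coefficients of f
w = 0.3 + 0.4j                                # a point of the open disc, |w| < 1

k_w = np.conj(w) ** np.arange(N)              # Taylor coefficients of k_w
a_full = np.zeros(N, dtype=complex)
a_full[: len(a)] = a

inner = np.sum(a_full * np.conj(k_w))         # <f, k_w> computed via (*)
f_at_w = np.polyval(a[::-1], w)               # direct evaluation of f at w

print(abs(inner - f_at_w))                    # essentially zero
```

Since $f$ is a polynomial, the truncation of $k_w$ causes no error at all here: the computed inner product agrees with $f(w)$ to machine precision.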
There are a great many other Hilbert function spaces of rather different function theoretic nature. Some of them are mentioned and studied in [P] and in [AM].
Example: The Hilbert space $\ell^2 = \ell^2(\mathbb{N})$ is a space of functions from $\mathbb{N}$ to $\mathbb{C}$, and point evaluation is clearly bounded. Thus $\ell^2$ is a Hilbert function space on the set $\mathbb{N}$. I have never heard of anything useful that came out of this point of view.
Exercise A.1: The Bergman space $L^2_a(D)$ is the space of all analytic functions in the disc which are square integrable on the disc:

$L^2_a(D) = \mathcal{O}(D) \cap L^2(D) = \big\{ f \in \mathcal{O}(D) : \int_D |f(z)|^2 \, dA(z) < \infty \big\}$

(where $\mathcal{O}(D)$ denotes the space of functions analytic in D). Show directly (that is, not by guessing the kernel functions) that point evaluation is bounded.
Exercise A.2: The Segal-Bargmann space is the space of all entire functions $f$ for which $\int_{\mathbb{C}} |f(z)|^2 e^{-|z|^2} \, dA(z) < \infty$. Show that this space is a Hilbert function space.
2. Reproducing kernels
Let $H$ be a Hilbert function space on a set $X$. As point evaluation is a bounded functional, there is, for every $x \in X$, an element $k_x \in H$ such that $f(x) = \langle f, k_x \rangle$ for all $f \in H$. The function $k_x$ is called the kernel function at $x$. We define a function $k$ on $X \times X$ by

$k(x,y) = \langle k_y, k_x \rangle$.

The function $k$ is called the reproducing kernel or simply the kernel of $H$. The terminology comes from the fact that the family of functions $\{k_x\}_{x \in X}$ enables one to reproduce the values of any $f \in H$ via the relationship $f(x) = \langle f, k_x \rangle$. Note that $k(x,y) = k_y(x)$.
Example: The kernel function for $H^2$ is

$k(z,w) = \sum_{n=0}^\infty \overline{w}^n z^n = \frac{1}{1 - z\overline{w}}$.
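One can check the closed form numerically; the particular points $z, w$ below are arbitrary choices of mine:

```python
import numpy as np

# Sanity check (my own, not from the notes): the kernel function of H^2,
# k_w(z) = sum_n conj(w)^n z^n, sums to the Szego kernel 1 / (1 - z*conj(w)).
z, w = 0.5 - 0.2j, 0.3 + 0.6j
n = np.arange(500)                       # 500 terms is plenty, since |z*conj(w)| < 1
series = np.sum((z * np.conj(w)) ** n)
closed_form = 1.0 / (1.0 - z * np.conj(w))
print(abs(series - closed_form))         # essentially zero
```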
Exercise B: What is the kernel function of $\ell^2$? What about the Bergman space $L^2_a(D)$? What about the Segal-Bargmann space?
The kernel has the following important property. If $x_1, \ldots, x_n \in X$ and $c_1, \ldots, c_n \in \mathbb{C}$, then

$\sum_{i,j=1}^n \overline{c_i} c_j k(x_i, x_j) = \big\| \sum_{j=1}^n c_j k_{x_j} \big\|^2 \geq 0$.

It follows that for every choice of points $x_1, \ldots, x_n \in X$, the matrix $\big(k(x_i, x_j)\big)_{i,j=1}^n$ is positive semi-definite. A function $k : X \times X \rightarrow \mathbb{C}$ with this property is called a positive definite kernel (sometimes it is called a positive semi-definite kernel, but let us lighten terminology). Every Hilbert function space gives rise to a positive definite kernel — its reproducing kernel. In fact, there is a bijective correspondence between positive definite kernels and Hilbert function spaces.
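Here is a quick numerical illustration (the kernel and the sample points are my own choices): the Gram matrix built from the Szegő kernel of $H^2$ at random points of the disc has no negative eigenvalues:

```python
import numpy as np

# Illustration (assumptions mine): the Gram matrix (k(z_i, z_j))_{i,j} built
# from the Szego kernel k(z, w) = 1 / (1 - z*conj(w)) at arbitrary points of
# the disc is positive semi-definite, i.e. all its eigenvalues are >= 0.
rng = np.random.default_rng(1)
pts = 0.9 * np.sqrt(rng.uniform(size=6)) * np.exp(2j * np.pi * rng.uniform(size=6))

Z, W = np.meshgrid(pts, pts, indexing="ij")
gram = 1.0 / (1.0 - Z * np.conj(W))      # gram[i, j] = k(z_i, z_j)

eigenvalues = np.linalg.eigvalsh(gram)   # gram is Hermitian, so eigvalsh applies
print(eigenvalues.min())                 # nonnegative up to roundoff
```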
Theorem 1: Let $k$ be a positive definite kernel on a set $X$. Then there exists a unique Hilbert function space $H$ on $X$ such that $k$ is the reproducing kernel of $H$.

Remark: Let me be a little bit more precise regarding the uniqueness statement, because in the context of Hilbert spaces one might mistakenly understand that this uniqueness is up to some kind of Hilbert space isomorphism. The existence statement is that there exists a subset $H$ of the space of all functions on $X$, which is a Hilbert function space, and has $k$ as a reproducing kernel. The uniqueness assertion is that this set is uniquely determined.
Proof: Denote $k_x = k(\cdot, x)$. Let $G$ be the linear subspace of $\mathbb{C}^X$ spanned by the functions $\{k_x\}_{x \in X}$. Equip $G$ with the sesquilinear form

$\big[ \sum_i a_i k_{x_i}, \sum_j b_j k_{y_j} \big] = \sum_{i,j} a_i \overline{b_j} k(y_j, x_i)$.

From the definition and the fact that $k$ is a positive definite kernel, the form $[\cdot, \cdot]$ satisfies properties 2–4 from the definition of inner product (see Definition 1 in this post). That is, it is almost an inner product, the only property missing is that $[f, f] = 0$ implies $f = 0$. As mentioned before (see Theorem 2 in that same old post) this means that the Cauchy-Schwarz inequality holds for this form on $G$. From this it follows that the space $N = \{f \in G : [f, f] = 0\}$ is a subspace, and also that $[f, g] = 0$ for all $f \in N$ and $g \in G$. This implies that the quotient space $G/N$ can be equipped with the inner product

$\langle \dot{f}, \dot{g} \rangle = [f, g]$.

As usual, we are using the notation $\dot{f}$ to denote the equivalence class of $f$ in $G/N$. That this is well defined and an inner product follows from what we said about $N$. Now complete $G/N$ to obtain a Hilbert space $\hat{H}$.
We started with a space of functions on $X$, but by taking the quotient and completing we are now no longer dealing with functions on $X$. To fix this, we define a map $\Phi : \hat{H} \rightarrow \mathbb{C}^X$ by

(**)   $\Phi(h)(x) = \langle h, \dot{k}_x \rangle$.

This is a linear map and it is injective because the set $\{\dot{k}_x\}_{x \in X}$ spans a dense subspace of $\hat{H}$. Note that

(***)   $\Phi(\dot{f})(x) = [f, k_x] = f(x)$ for all $f \in G$.

We may push the inner product from $\hat{H}$ to $H := \Phi(\hat{H})$, and we obtain a Hilbert space of functions in $\mathbb{C}^X$. Point evaluation is bounded by definition (see (**)), and the kernel function at $x$ is $\Phi(\dot{k}_x)$. By (***), this is just $k_x = k(\cdot, x)$. Let us now identify $H$ with $\hat{H}$. It follows that $H$ is spanned by the kernel functions $\{k_x\}_{x \in X}$ (meaning that their span is dense in $H$). The kernel function of this Hilbert function space is given by

$\langle k_y, k_x \rangle = [k_y, k_x] = k(x, y)$,

so $k$ is the reproducing kernel of $H$.
Now suppose that $H'$ is another Hilbert function space on $X$ that has $k$ as a reproducing kernel. By definition, the kernel functions $k_x$ are contained in $H'$, and the inner product on their span is determined by $k$, hence agrees with that of $H$; since $H$ is spanned by the kernel functions, $H \subseteq H'$. However, if $g \in H'$ is in the orthogonal complement of $H$ in $H'$ then $g(x) = \langle g, k_x \rangle = 0$ for all $x \in X$, thus $g = 0$. This shows that $H' = H$, completing the proof.
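The construction in the proof can be played with concretely on a finite set, where no completion is needed. The following sketch (the kernel and the coefficients are my own choices) builds the span of the kernel functions of a Gaussian kernel matrix and checks the reproducing property $[f, k_x] = f(x)$:

```python
import numpy as np

# A finite toy model of the construction (my sketch, not the post's): take a
# strictly positive definite kernel on a finite set X = {0, 1, ..., m-1}. The
# functions k_x are the columns of the kernel matrix K, and the inner product
# on span{k_x} is given through K. Then [f, k_x] = f(x) for every f in the span.
m = 5
x = np.arange(m, dtype=float)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # Gaussian kernel matrix (symmetric, SPD)

c = np.array([1.0, -2.0, 0.5, 0.0, 3.0])            # f = sum_i c_i k_{x_i}
f = K @ c                                           # f as a function on X (a vector of values)

for j in range(m):
    inner = c @ K[:, j]                             # [f, k_{x_j}] = sum_i c_i k(x_j, x_i)
    assert abs(inner - f[j]) < 1e-12                # the value f(x_j) is reproduced
print("reproducing property verified on the finite model")
```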
Remark: Sometimes Hilbert function spaces are referred to as reproducing kernel Hilbert spaces (RKHS for short), this terminology being justified by the preceding discussion.
Exercise C: Let $H$ be a RKHS on $X$ with kernel $k$. Let $x_1, \ldots, x_n \in X$. Show that the matrix $\big(k(x_i, x_j)\big)_{i,j=1}^n$ is strictly positive definite (meaning that it is positive semi-definite and invertible) iff the functions $k_{x_1}, \ldots, k_{x_n}$ are linearly independent, iff the evaluation functionals at the points $x_1, \ldots, x_n$ are linearly independent as functionals on $H$.
Exercise D: Show how the proof of the above theorem is simplified when $k$ is assumed strictly positive definite (meaning that the matrix $\big(k(x_i, x_j)\big)_{i,j=1}^n$ is strictly positive definite for any choice of distinct points). Show how at every stage of the process one has a space of functions on $X$.

Exercise E: Rewrite the above proof of the theorem (for $k$ not necessarily strictly positive definite) so that it involves no quotient-taking and no abstract completions.
3. $H^2$ as a “subspace” of $L^2$
We now continue to study the space $H^2$, and give a useful formula for the norm of elements in this space.

Let $f(z) = \sum_{n=0}^\infty a_n z^n \in H^2$. For any $0 < r < 1$ denote $f_r(z) = f(rz)$. The power series of $f_r$ is given by

$f_r(z) = \sum_{n=0}^\infty a_n r^n z^n$.

This series converges to $f_r$ uniformly on the closed unit disc. In fact, $f_r$ extends to an analytic function on a neighborhood of the closed unit disc, and the series converges uniformly and absolutely on a closed disc that is slightly larger than the unit disc. It follows that the following computations are valid:

$\frac{1}{2\pi} \int_0^{2\pi} |f_r(e^{it})|^2 \, dt = \frac{1}{2\pi} \int_0^{2\pi} \sum_{m,n} a_m \overline{a_n} r^{m+n} e^{i(m-n)t} \, dt = \sum_{n=0}^\infty |a_n|^2 r^{2n}$.
It would be nice to say that for every $f \in H^2$

$\|f\|^2 = \frac{1}{2\pi} \int_0^{2\pi} |f(e^{it})|^2 \, dt$,

and this is almost true, but we have to be careful because elements in $H^2$ are not defined on the unit circle. However,

$\|f\|^2 - \frac{1}{2\pi} \int_0^{2\pi} |f_r(e^{it})|^2 \, dt = \sum_{n=0}^\infty |a_n|^2 (1 - r^{2n})$,

and this tends to $0$ as $r \rightarrow 1$ by the dominated convergence theorem. In particular

$\|f\|^2 = \lim_{r \rightarrow 1} \frac{1}{2\pi} \int_0^{2\pi} |f(re^{it})|^2 \, dt$.
We will use this formula below. Much more is true in fact: the functions $f_r$ converge almost everywhere to a function $\tilde{f} \in L^2(\mathbb{T})$, and $\|f\|_{H^2} = \|\tilde{f}\|_{L^2}$. Moreover, $H^2$ can be identified via the map $f \mapsto \tilde{f}$ with the subspace of $L^2(\mathbb{T})$ consisting of all functions with Fourier series that are supported on the non-negative integers: if $f(z) = \sum_{n=0}^\infty a_n z^n$, then $\hat{\tilde{f}}(n) = a_n$ for $n \geq 0$ and $\hat{\tilde{f}}(n) = 0$ for $n < 0$. For details, see [AM]. What we will need is a slightly weaker fact:
Fact: A function $f$ analytic on D is in $H^2$ if and only if

$\sup_{0 < r < 1} \frac{1}{2\pi} \int_0^{2\pi} |f(re^{it})|^2 \, dt < \infty$.

Proof: We already showed that $H^2$ functions have this property. Now let $f$ be analytic in the disc, and suppose it has power series representation $f(z) = \sum_{n=0}^\infty a_n z^n$. As before,

$\frac{1}{2\pi} \int_0^{2\pi} |f(re^{it})|^2 \, dt = \sum_{n=0}^\infty |a_n|^2 r^{2n}$.

By the monotone convergence theorem the right hand side converges to $\sum_{n=0}^\infty |a_n|^2$ as $r \rightarrow 1$, so the limit of integrals is finite if and only if $\sum |a_n|^2 < \infty$, that is, if and only if $f \in H^2$.
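The norm formula is easy to test numerically. In the sketch below (the test function is my own choice) $f(z) = 1/(1 - z/2)$ has Taylor coefficients $a_n = 2^{-n}$, so $\|f\|^2 = \sum_n 4^{-n} = 4/3$, and the circle averages of $|f_r|^2$ should increase to this value:

```python
import numpy as np

# Numerical check (example mine): for f(z) = 1/(1 - z/2) in H^2 the mean of
# |f(r e^{it})|^2 over the circle equals sum_n 4^{-n} r^{2n} = 1/(1 - r^2/4),
# which increases to ||f||^2 = 4/3 as r -> 1.
f = lambda z: 1.0 / (1.0 - z / 2.0)
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

for r in (0.9, 0.99, 0.999):
    mean_sq = np.mean(np.abs(f(r * np.exp(1j * t))) ** 2)
    print(r, mean_sq)   # increases toward 4/3 = 1.333...
```

(The equally spaced Riemann sum is extremely accurate here, since the trapezoidal rule converges spectrally for smooth periodic integrands.)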
4. Multiplier algebras
Let $H$ be a Hilbert function space on $X$, to be fixed in the following discussion. Now we define a class of natural operators on $H$.

The multiplier algebra of $H$, denoted $\operatorname{Mult}(H)$, is defined as

$\operatorname{Mult}(H) = \{ f : X \rightarrow \mathbb{C} : fh \in H \text{ for all } h \in H \}$.

In this definition $fh$ simply denotes the usual pointwise product of two functions: $(fh)(x) = f(x)h(x)$. For every $f \in \operatorname{Mult}(H)$ we define an operator $M_f : H \rightarrow H$ by

$M_f h = fh$.

It is not unusual to identify $f$ and $M_f$, and we will do so too, but sometimes it is helpful to distinguish between the function $f$ and the linear operator $M_f$.
Proposition 2: For all $f \in \operatorname{Mult}(H)$, we have $M_f \in B(H)$.

Proof: Obviously $M_f$ is a well defined linear operator. To show that it is bounded we use the closed graph theorem. Let $h_n \rightarrow h$ in $H$, and suppose that $M_f h_n \rightarrow g$. Since point evaluation is continuous we have for all $x \in X$

$g(x) = \lim_n (M_f h_n)(x) = \lim_n f(x) h_n(x) = f(x) h(x)$,

thus $g(x) = f(x)h(x)$ for all $x \in X$, meaning that $g = M_f h$, so the graph of $M_f$ is closed.

Thus, $\operatorname{Mult}(H)$ may be considered as a subalgebra of $B(H)$. We norm $\operatorname{Mult}(H)$ by pulling back the norm from $B(H)$ (this means $\|f\|_{\operatorname{Mult}(H)} = \|M_f\|$), and this turns $\operatorname{Mult}(H)$ into a normed algebra. We will see below that it is a Banach algebra.
Important assumption: In fact, there is a situation when $\|\cdot\|_{\operatorname{Mult}(H)}$ is not a norm, but only a semi-norm. This doesn’t arise when for every $x \in X$ there exists a function $h \in H$ such that $h(x) \neq 0$. Thus we will assume this henceforth. This assumption holds in most (but not all) cases of interest, and in particular in the case of $H^2$. It is equivalent to the assumption that $k(x,x) > 0$ for all $x \in X$.
One of the instances when it is helpful to distinguish between a multiplier and the multiplication operator that it induces is when discussing adjoints.
Proposition 3: Let $f \in \operatorname{Mult}(H)$. For all $x \in X$, the kernel function $k_x$ is an eigenvector for $M_f^*$ with eigenvalue $\overline{f(x)}$. Conversely, if $T \in B(H)$ is an operator such that $T^* k_x = \overline{\lambda(x)} k_x$ for all $x \in X$ (where $\lambda$ is some function on $X$), then $\lambda \in \operatorname{Mult}(H)$ and $T = M_\lambda$.

Proof: We let $f \in \operatorname{Mult}(H)$ and compute for all $h \in H$

$\langle h, M_f^* k_x \rangle = \langle M_f h, k_x \rangle = f(x) h(x) = f(x) \langle h, k_x \rangle = \langle h, \overline{f(x)} k_x \rangle$,

but $h$ was arbitrary. Thus $M_f^* k_x = \overline{f(x)} k_x$.

For the converse, suppose $T^* k_x = \overline{\lambda(x)} k_x$ for all $x \in X$. For all $h \in H$, we have $Th \in H$ too. We now compute

$(Th)(x) = \langle Th, k_x \rangle = \langle h, T^* k_x \rangle = \lambda(x) \langle h, k_x \rangle = \lambda(x) h(x)$.

It follows that $Th = \lambda h$, so $\lambda \in \operatorname{Mult}(H)$. It also follows that $T = M_\lambda$.
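In the Taylor-coefficient basis of $H^2$, the operator $M_f$ for a polynomial $f$ is a lower-triangular Toeplitz matrix, and the eigenvector relation of Proposition 3 can be observed on a large truncation (the sizes and the polynomial below are my own choices; truncation only perturbs the last few entries of $k_w$, which are negligibly small for $|w| < 1$):

```python
import numpy as np

# Truncated-matrix illustration (setup mine): in the Taylor-coefficient basis
# of H^2, multiplication by a polynomial f is a lower-triangular Toeplitz
# matrix T. Proposition 3 predicts T^* k_w = conj(f(w)) k_w, where the kernel
# function k_w has coefficient vector (conj(w)^n)_n.
N = 200
b = np.array([1.0, -0.5, 0.25])                 # f(z) = 1 - 0.5 z + 0.25 z^2
w = 0.4 + 0.3j

T = np.zeros((N, N), dtype=complex)
for m, bm in enumerate(b):
    T += bm * np.eye(N, k=-m)                   # coefficient b_m on the m-th subdiagonal

k_w = np.conj(w) ** np.arange(N)
f_at_w = np.polyval(b[::-1], w)

lhs = T.conj().T @ k_w                          # action of M_f^* on k_w (truncated)
rhs = np.conj(f_at_w) * k_w
print(np.linalg.norm(lhs - rhs))                # essentially zero
```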
Corollary 4: $\operatorname{Mult}(H)$ contains only bounded functions, and $\sup_{x \in X} |f(x)| \leq \|M_f\|$.

Proof: This follows from Propositions 2 and 3, since the eigenvalues of $M_f^*$ are contained in the spectrum $\sigma(M_f^*)$, which is contained in the disc of radius $\|M_f^*\| = \|M_f\|$.
Corollary 5: $\operatorname{Mult}(H)$ is closed in the weak operator topology of $B(H)$, and, in particular, $\operatorname{Mult}(H)$ is a Banach algebra.

Remark: The weak operator topology is the topology on $B(H)$ in which a net $T_i \rightarrow T$ if and only if $\langle T_i g, h \rangle \rightarrow \langle T g, h \rangle$ for all $g, h \in H$. The reader may safely replace this with the norm topology of $B(H)$ for the purposes of these lectures.

Proof: If $\{M_{f_i}\}$ is a net in $\operatorname{Mult}(H)$ converging to $T$ in the weak operator topology, then $M_{f_i}^* \rightarrow T^*$ WOT. Then for all $x \in X$, $M_{f_i}^* k_x = \overline{f_i(x)} k_x$ converges weakly to $T^* k_x$. Since $\|k_x\|^2 = k(x,x) > 0$, it follows that $f_i(x)$ converges to some number $f(x)$, and that $T^* k_x = \overline{f(x)} k_x$, so $\overline{f(x)}$ is an eigenvalue for $T^*$ with eigenvector $k_x$. By Proposition 3, $T = M_f \in \operatorname{Mult}(H)$.
Example: Let’s find out what the multiplier algebra of $H^2$ is. Since $1 \in H^2$, we have that $\operatorname{Mult}(H^2) = \operatorname{Mult}(H^2) \cdot 1 \subseteq H^2$, and in particular every multiplier of $H^2$ is an analytic function on D. By Corollary 4 we have that every multiplier is bounded on the disc. Thus

$\operatorname{Mult}(H^2) \subseteq H^\infty(D)$.

The right hand side denotes the algebra of bounded analytic functions in D. On the other hand, by the formula developed in Section 3, we have for $f \in H^\infty(D)$ and $h \in H^2$,

$\frac{1}{2\pi} \int_0^{2\pi} |f(re^{it}) h(re^{it})|^2 \, dt \leq \|f\|_\infty^2 \, \frac{1}{2\pi} \int_0^{2\pi} |h(re^{it})|^2 \, dt \leq \|f\|_\infty^2 \|h\|^2$.

By the fact at the end of Section 3, we obtain $fh \in H^2$ and $\|fh\| \leq \|f\|_\infty \|h\|$, so $\operatorname{Mult}(H^2) = H^\infty(D)$, and together with Corollary 4, $\|M_f\| = \|f\|_\infty$.
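The equality $\|M_f\| = \|f\|_\infty$ can be glimpsed numerically: norms of truncations of the Toeplitz matrix of $M_f$ stay below $\|f\|_\infty$ and creep up to it as the truncation grows (the polynomial $f(z) = (1+z)/2$, with $\|f\|_\infty = 1$ attained at $z = 1$, is my own choice):

```python
import numpy as np

# Numerical illustration (example mine): for f(z) = (1 + z)/2 we have
# ||f||_infty = 1.  The N x N truncation of the Toeplitz matrix of M_f on H^2
# has operator norm <= ||f||_infty, and the norm approaches it as N grows.
for N in (10, 50, 200):
    T = 0.5 * (np.eye(N) + np.eye(N, k=-1))    # truncated matrix of M_f
    print(N, np.linalg.norm(T, 2))             # largest singular value, creeps up toward 1
```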
Exercise F: What is the multiplier algebra of $\ell^2$? Of the Bergman space $L^2_a(D)$? Can you cook up a non-trivial RKHS that has a trivial (scalars only) multiplier algebra?
5. First magic trick
Note that as a consequence of the previous example and Corollary 5 we have that $H^\infty(D)$ is a Banach algebra. It follows readily that the limit of any locally uniformly convergent sequence of analytic functions (on any open set) is also analytic. I leave it to the reader to check just how much complex analysis we used to obtain this result, and to compare with the other proof they know of this fact.