### Advanced Analysis, Notes 17: Hilbert function spaces (Pick’s interpolation theorem)

#### by Orr Shalit

In this final lecture we will give a proof of Pick’s interpolation theorem that is based on operator theory.

**Theorem 1 (Pick’s interpolation theorem):** *Let $z_1, \ldots, z_n \in \mathbb{D}$ and $w_1, \ldots, w_n \in \mathbb{C}$ be given. There exists a function $f \in H^\infty(\mathbb{D})$ satisfying $\|f\|_\infty \leq 1$ and*

$$f(z_i) = w_i \quad , \quad i = 1, \ldots, n,$$

*if and only if the following matrix inequality holds:*

$$\left(\frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}}\right)_{i,j=1}^n \geq 0 .$$

Note that the matrix element appearing in the theorem is equal to $(1 - w_i \overline{w_j})\, k(z_i, z_j)$, where $k(z,w) = \frac{1}{1 - z\overline{w}}$ is the reproducing kernel for the Hardy space $H^2$ (this kernel is called **the Szegő kernel**). Given $z_1, \ldots, z_n$ and $w_1, \ldots, w_n$, the matrix

$$\big( (1 - w_i \overline{w_j})\, k(z_i, z_j) \big)_{i,j=1}^n$$

is called **the Pick matrix**, and it plays a central role in various interpolation problems on various spaces.
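As a quick numerical illustration (mine, not from the lecture), the Pick matrix is easy to assemble and test with NumPy; the helper name `pick_matrix` and the sample points below are ad hoc. Since $f(z) = z^2$ lies in the closed unit ball of $H^\infty$, the theorem predicts that the Pick matrix for the data $w_i = z_i^2$ is positive semi-definite:

```python
import numpy as np

def pick_matrix(z, w):
    """The Pick matrix ((1 - w_i conj(w_j)) / (1 - z_i conj(z_j)))_{i,j}."""
    z, w = np.asarray(z), np.asarray(w)
    return (1 - np.outer(w, np.conj(w))) / (1 - np.outer(z, np.conj(z)))

# f(z) = z^2 has sup-norm 1 on the disc, so for w_i = z_i^2 the
# Pick matrix must be positive semi-definite (smallest eigenvalue >= 0).
z = np.array([0.3, 0.5j, -0.4 + 0.2j])
P = pick_matrix(z, z**2)
assert np.allclose(P, P.conj().T)            # Hermitian
assert np.linalg.eigvalsh(P).min() > -1e-12  # PSD up to rounding
```

Replacing `z**2` by targets that no unit-ball function can match makes the smallest eigenvalue go negative.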

I learned this material from Agler and McCarthy’s monograph [AM], so the following is my adaptation of that source.

(A very interesting article by John McCarthy on Pick’s theorem can be found here).

#### 1. A necessary condition for interpolation

**Lemma 1:** *Let $A \in B(H)$. Then $\|A\| \leq 1$ if and only if $I - AA^* \geq 0$.*

**Proof:** Let $h \in H$. Then

$$\langle (I - AA^*) h, h \rangle = \|h\|^2 - \|A^* h\|^2 ,$$

and this is non-negative for all $h \in H$ if and only if $\|A^*\| \leq 1$, which happens if and only if $\|A\| \leq 1$.
**Proposition 2:** *Let $H$ be a RKHS on a set $X$ with kernel $k$. A function $f : X \to \mathbb{C}$ is a multiplier of norm less than or equal to $1$ if and only if for every $n$ and every $n$ points $x_1, \ldots, x_n \in X$ the associated Pick matrix is positive semi-definite, meaning:*

$$\big( (1 - f(x_i) \overline{f(x_j)})\, k(x_i, x_j) \big)_{i,j=1}^n \geq 0 .$$

**Proof:** Let us define the operator $T$ on $\operatorname{span}\{k_x : x \in X\}$ by extending linearly the rule

$$T k_x = \overline{f(x)}\, k_x .$$

Suppose first that $f$ is a multiplier of norm less than or equal to $1$. Then by Proposition 3 in the previous post $T$ extends to a bounded operator on $H$ and is equal to $M_f^*$. Let $c_1, \ldots, c_n \in \mathbb{C}$ and $h = \sum_{i=1}^n c_i k_{x_i}$. Then

$$\langle (I - M_f M_f^*) h, h \rangle = \|h\|^2 - \|M_f^* h\|^2 = \sum_{i,j=1}^n \big(1 - f(x_j)\overline{f(x_i)}\big)\, k(x_j, x_i)\, c_i \overline{c_j} .$$

Since the span of the $k_x$'s is dense in $H$, the lemma implies that $\|M_f\| \leq 1$ if and only if the Pick matrices are positive semi-definite. The other direction is similar.

**Exercise A:** Complete the above proof.

**Corollary 3:** *Let $z_1, \ldots, z_n \in \mathbb{D}$ and $w_1, \ldots, w_n \in \mathbb{C}$ be given. A necessary condition for there to exist a function $f \in H^\infty(\mathbb{D})$ satisfying $\|f\|_\infty \leq 1$ and*

$$f(z_i) = w_i \quad , \quad i = 1, \ldots, n,$$

*is that the following matrix inequality holds:*

$$\left(\frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}}\right)_{i,j=1}^n \geq 0 .$$

**Proof:** This follows from Proposition 2, from the formula $k(z,w) = (1 - z\overline{w})^{-1}$ for the Szegő kernel, and from the fact that $\operatorname{Mult}(H^2) = H^\infty(\mathbb{D})$ with equality of norms.

**Exercise B:** Find the correct version of the above three results when the condition of having norm less than or equal to $1$ is replaced by the condition of being bounded.

**Remark:** Note that it is important to put the bar on the right element. If $f$ is a non-constant function of norm one, then there must be points $z_1, \ldots, z_n \in \mathbb{D}$ such that the matrix

$$\left(\frac{1 - \overline{f(z_i)}\, f(z_j)}{1 - z_i \overline{z_j}}\right)_{i,j=1}^n$$

is not positive semi-definite (why?).
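To see the remark in action, here is a small check of mine for $f(z) = z$: the correctly barred matrix is the (positive semi-definite, rank-one) all-ones matrix, while the version with the bar misplaced in the numerator already fails on three points.

```python
import numpy as np

z = np.array([0.5, 0.5j, -0.5])   # three points in the disc; f(z) = z, so w = z

right = (1 - np.outer(z, np.conj(z))) / (1 - np.outer(z, np.conj(z)))  # all ones
wrong = (1 - np.outer(np.conj(z), z)) / (1 - np.outer(z, np.conj(z)))  # bar misplaced

# Both matrices are Hermitian, but only the first is positive semi-definite.
assert np.linalg.eigvalsh(right).min() > -1e-12
assert np.linalg.eigvalsh(wrong).min() < -0.1
```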

Proposition 2 gives one way of proving Pick’s theorem: one shows that, when the Pick matrix at $z_1, \ldots, z_n$ is positive semi-definite, values of $f$ can be chosen at all the other points of the disc so that the Pick matrix (now on all of $\mathbb{D}$) remains positive semi-definite. This approach is perhaps the purest RKHS approach. We will use a different approach, which will actually give a formula for a solution. Both approaches are treated in [AM].

#### 2. The realization formula

A key to the approach we will take to the interpolation problem is the following characterization of the functions in $\overline{B(H^\infty)}$, the closed unit ball of $H^\infty(\mathbb{D})$.

**Theorem 4:** *Let $f : \mathbb{D} \to \mathbb{C}$. Then $f \in \overline{B(H^\infty)}$ if and only if there is a Hilbert space $\mathcal{H}$ and an isometry $V : \mathbb{C} \oplus \mathcal{H} \to \mathbb{C} \oplus \mathcal{H}$ with block structure*

$$V = \begin{pmatrix} a & B \\ C & D \end{pmatrix}$$

*such that*

(*) $\quad f(z) = a + zB(I - zD)^{-1}C$

*for all $z \in \mathbb{D}$.*

**Proof:** For sufficiency, suppose that $V = \begin{pmatrix} a & B \\ C & D \end{pmatrix}$ is an isometry as in the theorem. Then $D$ is also a contraction, thus $I - zD$ is invertible for all $z \in \mathbb{D}$ and $z \mapsto (I - zD)^{-1}$ is an analytic operator valued function. Therefore $f$ defined by (*) is analytic in $\mathbb{D}$. To see that $\|f\|_\infty \leq 1$ we make a computation. Write $E(z) = (I - zD)^{-1}C$, so that $f(z) = a + zBE(z)$ and $C + zDE(z) = E(z)$; in other words,

$$V \begin{pmatrix} 1 \\ zE(z) \end{pmatrix} = \begin{pmatrix} a + zBE(z) \\ C + zDE(z) \end{pmatrix} = \begin{pmatrix} f(z) \\ E(z) \end{pmatrix} .$$

Therefore, for all $z, w \in \mathbb{D}$,

$$f(z)\overline{f(w)} + \langle E(z), E(w) \rangle = \left\langle V \begin{pmatrix} 1 \\ zE(z) \end{pmatrix}, V \begin{pmatrix} 1 \\ wE(w) \end{pmatrix} \right\rangle = 1 + z\overline{w}\, \langle E(z), E(w) \rangle ,$$

where in the second equality we used the fact that $V$ is an isometry: $\langle Vu, Vv \rangle = \langle u, v \rangle$. This gives

$$1 - f(z)\overline{f(w)} = (1 - z\overline{w})\, \langle E(z), E(w) \rangle ,$$

and taking $w = z$ we obtain $1 - |f(z)|^2 = (1 - |z|^2)\, \|E(z)\|^2 \geq 0$, so $\|f\|_\infty \leq 1$.

We leave the converse as an exercise, for two reasons. First, we need only the sufficiency part of the above theorem for our proof of Pick’s theorem. Second, our proof of Pick’s theorem contains the same idea that is used to prove the necessity part.

**Exercise C:** Complete the proof of the above theorem (suggestion: first give it a try on your own, and if you get stuck you can look at the proof of Pick’s theorem below).
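The sufficiency direction can also be sanity-checked numerically (an illustration of mine, not part of the lecture): generate a random unitary $V$ on $\mathbb{C} \oplus \mathbb{C}^k$ (a unitary is in particular an isometry), define $f$ by the realization formula (*), and test both that $|f| \leq 1$ on the disc and the identity $1 - f(z)\overline{f(w)} = (1 - z\overline{w})\langle E(z), E(w)\rangle$ with $E(z) = (I - zD)^{-1}C$.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
# A random unitary on C^(1+k), obtained from a QR decomposition.
M = rng.normal(size=(k + 1, k + 1)) + 1j * rng.normal(size=(k + 1, k + 1))
V, _ = np.linalg.qr(M)
a, B, C, D = V[0, 0], V[0, 1:], V[1:, 0], V[1:, 1:]

def E(z):
    # E(z) = (I - zD)^{-1} C; the inverse exists since ||D|| <= 1 and |z| < 1.
    return np.linalg.solve(np.eye(k) - z * D, C)

def f(z):
    # The realization formula (*): f(z) = a + z B (I - zD)^{-1} C.
    return a + z * (B @ E(z))

# |f| <= 1 on a sample of points close to the boundary of the disc.
for t in range(12):
    assert abs(f(0.95 * np.exp(2j * np.pi * t / 12))) <= 1 + 1e-10

# The key identity behind the proof of sufficiency.
z, w = 0.3 + 0.4j, -0.5 + 0.1j
lhs = 1 - f(z) * np.conj(f(w))
rhs = (1 - z * np.conj(w)) * np.vdot(E(w), E(z))
assert abs(lhs - rhs) < 1e-12
```

Here `np.vdot(E(w), E(z))` computes $\langle E(z), E(w)\rangle$ in the convention that the inner product is linear in the first slot.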

#### 3. Proof of Pick’s theorem

We now complete the proof of Pick’s theorem. Corollary 3 takes care of one direction: positivity of the Pick matrix is a necessary condition for a norm one multiplier interpolant to exist. It remains to show that this condition is sufficient. For that, we require first a lemma which is of interest in its own right.

**Lemma 5:** *Let $k : X \times X \to \mathbb{C}$ be a positive semi-definite kernel. Then there exists a Hilbert space $\mathcal{H}$ and a function $b : X \to \mathcal{H}$ such that*

$$k(x,y) = \langle b(x), b(y) \rangle$$

*for all $x, y \in X$. If $X$ is a set with $n$ points, then the dimension of $\mathcal{H}$ can be chosen to be the rank of $k$ when considered as a positive semi-definite $n \times n$ matrix.*

**Proof:** Let $H$ be the RKHS associated with $k$ as in Theorem 1 in the previous post. Define $b : X \to H$ by $b(x) = k_x$. Then we have

$$\langle b(x), b(y) \rangle = \langle k_x, k_y \rangle = k(y, x) .$$

This is not exactly what the lemma asserts; to obtain the assertion of the lemma apply what we just did to the kernel $\overline{k}$, which is also positive semi-definite.

To prove the lemma in the finite case, assume that $X = \{x_1, \ldots, x_n\}$ and consider the positive semi-definite matrix $A = \big(k(x_i, x_j)\big)_{i,j=1}^n$. Let the rank of this matrix be $r$. Then by the spectral theorem $A = \sum_{m=1}^r \lambda_m v_m v_m^*$, where $v_1, \ldots, v_r$ are orthonormal eigenvectors corresponding to the non-zero (hence positive) eigenvalues $\lambda_1, \ldots, \lambda_r$. Denote $u_m = \sqrt{\lambda_m}\, v_m$ for $m = 1, \ldots, r$. Now define $b : X \to \mathbb{C}^r$ by $b(x_i) = \big(u_1(i), \ldots, u_r(i)\big)$ for $i = 1, \ldots, n$. Then we have that

$$\langle b(x_i), b(x_j) \rangle = \sum_{m=1}^r u_m(i)\overline{u_m(j)} = \sum_{m=1}^r \lambda_m v_m(i)\overline{v_m(j)} = A_{ij} = k(x_i, x_j) ,$$

as was to be proved.
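The finite-dimensional factorization in this proof can be carried out verbatim with NumPy's `eigh` (a sketch of mine; the rank-2 test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# A rank-2 positive semi-definite 5x5 matrix, playing the role of (k(x_i, x_j)).
F = rng.normal(size=(5, 2)) + 1j * rng.normal(size=(5, 2))
A = F @ F.conj().T

lam, v = np.linalg.eigh(A)
keep = lam > 1e-10 * lam.max()           # the non-zero eigenvalues
b = v[:, keep] * np.sqrt(lam[keep])      # row i of b is the vector b(x_i) in C^r

assert b.shape[1] == 2                   # dim H = rank A
assert np.allclose(A, b @ b.conj().T)    # k(x_i, x_j) = <b(x_i), b(x_j)>
```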

The following theorem completes the proof of Pick’s theorem. In fact, we obtain a little more information than is claimed in Theorem 1.

**Theorem 6:** *Let $z_1, \ldots, z_n \in \mathbb{D}$ and $w_1, \ldots, w_n \in \mathbb{C}$ be given. If the Pick matrix*

$$\left(\frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}}\right)_{i,j=1}^n$$

*is positive semi-definite and has rank $r$, then there exists a rational function $f$ of degree at most $r$ such that $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$, for all $i = 1, \ldots, n$.*

**Remark:** By **degree** of a rational function we mean either the degree of the numerator or the degree of the denominator — the bigger one — appearing in the rational function in reduced form. A consequence of the boundedness of $f$ on $\mathbb{D}$ is that the poles of $f$ lie away from the closed unit disc.

**Proof:** By Lemma 5 we know that there are $b_1, \ldots, b_n \in \mathbb{C}^r$ such that

$$\frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} = \langle b_i, b_j \rangle \quad , \quad i, j = 1, \ldots, n.$$

We can rewrite this identity in the following form:

$$1 + \langle z_i b_i, z_j b_j \rangle = w_i \overline{w_j} + \langle b_i, b_j \rangle .$$

This means that if we define a linear map from $\operatorname{span}\left\{\begin{pmatrix} 1 \\ z_i b_i \end{pmatrix} : i = 1, \ldots, n\right\} \subseteq \mathbb{C} \oplus \mathbb{C}^r$ into $\mathbb{C} \oplus \mathbb{C}^r$ by sending $\begin{pmatrix} 1 \\ z_i b_i \end{pmatrix}$ to $\begin{pmatrix} w_i \\ b_i \end{pmatrix}$ and extending linearly, then this map is isometric on its domain. Since isometric subspaces have equal dimension (and hence orthocomplements of equal dimension), we can extend it to an isometry $V : \mathbb{C} \oplus \mathbb{C}^r \to \mathbb{C} \oplus \mathbb{C}^r$.

Recall that the realization formula gives us a way to write down a function in the closed unit ball of $H^\infty$ given an isometric matrix in block form. So let us write

$$V = \begin{pmatrix} a & B \\ C & D \end{pmatrix} ,$$

where the decomposition is according to $\mathbb{C} \oplus \mathbb{C}^r$, and define $f(z) = a + zB(I - zD)^{-1}C$. By Theorem 4, $f \in \overline{B(H^\infty)}$, and it is rational because there are rational formulas (Cramer’s rule) for the computation of the inverse of a matrix. The degree of $f$ is evidently not greater than $r$. We have to show that $f$ interpolates the data.

Fix $i$. From the definition of $V$ we have

$$\begin{pmatrix} a & B \\ C & D \end{pmatrix} \begin{pmatrix} 1 \\ z_i b_i \end{pmatrix} = \begin{pmatrix} w_i \\ b_i \end{pmatrix} .$$

The first row gives $a + z_i B b_i = w_i$. The second row gives $C + z_i D b_i = b_i$, so solving for $b_i$ we obtain $b_i = (I - z_i D)^{-1}C$ (where the inverse is legal because $\|z_i D\| \leq |z_i| < 1$). Plugging this in the first row gives

$$w_i = a + z_i B (I - z_i D)^{-1} C = f(z_i) ,$$

thus $f$ interpolates the data, and the proof is complete.

**Exercise D:** Go over the proof and make sure you understand how it gives a formula for the solution.
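The proof can in fact be turned into a short numerical procedure. The sketch below (mine; the function name `pick_interpolant` and the tolerance choices are ad hoc) factors the Pick matrix as in Lemma 5, extends the lurking isometry to a unitary by solving an orthogonal Procrustes problem (one convenient way to extend an isometry between finite-dimensional subspaces), and returns the function given by the realization formula.

```python
import numpy as np

def pick_interpolant(z, w):
    """Given data with a positive semi-definite Pick matrix, return a
    function f in the closed unit ball of H^infinity with f(z_i) = w_i,
    constructed by the lurking isometry argument."""
    z, w = np.asarray(z), np.asarray(w)
    n = len(z)
    P = (1 - np.outer(w, np.conj(w))) / (1 - np.outer(z, np.conj(z)))
    lam, v = np.linalg.eigh(P)
    keep = lam > 1e-12 * lam.max()
    b = v[:, keep] * np.sqrt(lam[keep])        # rows b_i; P = b b^*
    r = b.shape[1]
    # Columns (1, z_i b_i) and (w_i, b_i) have equal Gram matrices ...
    X = np.vstack([np.ones(n), z * b.T])
    Y = np.vstack([w, b.T])
    # ... so the map X_i -> Y_i is isometric; extend it to a unitary V
    # via the orthogonal Procrustes problem: V = U W* where Y X* = U S W*.
    U, _, Wh = np.linalg.svd(Y @ X.conj().T)
    V = U @ Wh
    a, B, C, D = V[0, 0], V[0, 1:], V[1:, 0], V[1:, 1:]
    return lambda s: a + s * (B @ np.linalg.solve(np.eye(r) - s * D, C))

z = np.array([0.2, -0.3 + 0.4j, 0.5j, -0.6])
w = z * (z + 1) / 2        # targets coming from a unit-ball function
f = pick_interpolant(z, w)
assert max(abs(f(zi) - wi) for zi, wi in zip(z, w)) < 1e-6
assert all(abs(f(0.99 * np.exp(2j * np.pi * t / 9))) <= 1 + 1e-8 for t in range(9))
```

Here the targets come from $g(z) = z(z+1)/2$, but the procedure never uses $g$; it reconstructs some unit-ball interpolant, of degree at most the rank of the Pick matrix, from the data alone.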

**Remark:** The argument that we used to prove Pick’s theorem has a cute name: *the lurking isometry argument.* It generalizes to broader contexts, and can be used to prove the necessity part of Theorem 4.

#### 4. Brief on the commutant lifting approach

There is another approach to the interpolation problem that is “even more” operator theoretic. It is now known as the commutant lifting approach. The interested student can find a more detailed treatment in [AM].

We need the following definition. Let $T$ be a contraction on a Hilbert space $H$. Suppose that $K$ is a Hilbert space which contains $H$ as a subspace and that $V$ is an isometry on $K$. Denote by $P_H$ the orthogonal projection of $K$ onto $H$. If

$$T^n = P_H V^n \big|_H$$

for all $n \in \mathbb{N}$, then we say that $V$ is an **isometric** **dilation** of $T$, and that $T$ is a **compression** of $V$.

An isometric dilation $V$ of $T$ is said to be **minimal** if the only subspace of $K$ which contains $H$ and is invariant for $V$ is $K$ itself.

**Remark:** Note that if $V \in B(K)$ is a minimal isometric dilation for $T$ then $K = \overline{\operatorname{span}}\{V^n h : h \in H, n \geq 0\}$. Also, if $g, h \in H$ and $m \geq n$ then

$$\langle V^m g, V^n h \rangle = \langle V^{m-n} g, h \rangle = \langle T^{m-n} g, h \rangle ,$$

which depends only on $T$.

**Theorem 7 (Sz.-Nagy’s isometric dilation):** *Every contraction $T$ on a Hilbert space $H$ has a minimal isometric dilation. The minimal isometric dilation is unique in the following sense: if $V_1, V_2$ are both minimal isometric dilations of $T$ on Hilbert spaces $K_1, K_2$, then there exists a unitary $U : K_1 \to K_2$ such that*

$$U h = h$$

*for all $h \in H$, and such that $U V_1 = V_2 U$.*

**Proof:** Define $D_T = (I - T^*T)^{1/2}$, and let $\mathcal{D} = \overline{D_T H}$. Define $K = H \oplus \mathcal{D} \oplus \mathcal{D} \oplus \cdots$ and

$$V(h, d_1, d_2, \ldots) = (Th, D_T h, d_1, d_2, \ldots) .$$

We can check that $V$ is indeed a minimal isometric dilation for $T$ (exercise).

As for uniqueness, if $V_1, V_2$ are two minimal isometric dilations on $K_1, K_2$, we define $U$ on $\operatorname{span}\{V_1^n h : h \in H, n \geq 0\}$ by

$$U \Big( \sum_i V_1^{n_i} h_i \Big) = \sum_i V_2^{n_i} h_i .$$

By the remark before the theorem we have that

$$\Big\| \sum_i V_1^{n_i} h_i \Big\|^2 = \sum_{i,j} \langle V_1^{n_i} h_i, V_1^{n_j} h_j \rangle = \sum_{i,j} \langle V_2^{n_i} h_i, V_2^{n_j} h_j \rangle = \Big\| \sum_i V_2^{n_i} h_i \Big\|^2 ,$$

so $U$ is well defined and extends to a unitary between $K_1$ and $K_2$ (minimality is what makes the domain dense in $K_1$ and the range dense in $K_2$). Again, we may check that $U V_1 = V_2 U$ and that $Uh = h$ for $h \in H$.
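The construction in this proof is easy to probe numerically. The sketch below (mine) truncates $K = H \oplus \mathcal{D} \oplus \mathcal{D} \oplus \cdots$ after $N$ copies of $\mathcal{D}$; the truncated $V$ is still isometric on vectors supported in the first $N$ blocks, and the compression identity $T^n = P_H V^n \big|_H$ can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 3, 5                            # dim H and the truncation length
T = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
T /= 2 * np.linalg.norm(T, 2)          # normalize so that ||T|| = 1/2 < 1

# The defect operator D_T = (I - T*T)^(1/2), via the spectral theorem.
lam, u = np.linalg.eigh(np.eye(d) - T.conj().T @ T)
DT = (u * np.sqrt(np.clip(lam, 0, None))) @ u.conj().T

# Truncation of the dilation: V(h, d_1, d_2, ...) = (Th, D_T h, d_1, d_2, ...).
K = d * (N + 1)
V = np.zeros((K, K), dtype=complex)
V[:d, :d] = T
V[d:2*d, :d] = DT
for m in range(1, N):
    V[(m + 1) * d:(m + 2) * d, m * d:(m + 1) * d] = np.eye(d)

# V is isometric on vectors whose last block vanishes ...
x = np.zeros(K, dtype=complex)
x[:d * N] = rng.normal(size=d * N)
assert np.isclose(np.linalg.norm(V @ x), np.linalg.norm(x))

# ... and the compression of V^n to H is T^n.
for n in range(N + 1):
    assert np.allclose(np.linalg.matrix_power(V, n)[:d, :d],
                       np.linalg.matrix_power(T, n))
```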

**Example:** Let $S = M_z$ be the multiplication operator $(Sf)(z) = zf(z)$ on $H^2$. Then $S$ is an isometry. If $z_1, \ldots, z_n \in \mathbb{D}$ and $H = \operatorname{span}\{k_{z_1}, \ldots, k_{z_n}\}$, then $H^\perp$ is an invariant subspace for $S$ (why?), therefore $T = P_H S \big|_H$ is a compression of $S$ and $S$ is an isometric dilation for $T$. $S$ is in fact the minimal isometric dilation for $T$ (exercise).

The key operator theoretic result we shall need is the following:

**Theorem 8 (Foias–Sz.-Nagy commutant lifting theorem):** *Let $V \in B(K)$ be an isometric dilation of $T \in B(H)$. Suppose that $X \in B(H)$ satisfies $XT = TX$. Then there exists $Y \in B(K)$ such that $YV = VY$, which satisfies $P_H Y = X P_H$ and $\|Y\| = \|X\|$.*

I will not supply a proof of this theorem here. We will also require the following fact:

**Proposition 9:** *Let $S$ be as in the example above and let $Y \in B(H^2)$ satisfy $YS = SY$. Then there exists $f \in H^\infty(\mathbb{D})$ such that $Y = M_f$.*

**Proof:** Let $f = Y1$. For every polynomial $p$ we have $Yp = Y p(S) 1 = p(S) Y 1 = pf$. From the density of the polynomials in $H^2$ and the boundedness of $Y$ it follows that $f$ is a multiplier and that $Y = M_f$.
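Here is a toy illustration of mine of the mechanism $f = Y1$, on the finite-dimensional model where $H^2$ is replaced by polynomials of degree less than $N$ and $S$ by the shift on Taylor coefficients. It does not prove the proposition; it merely exhibits how a polynomial in the shift is the operator of multiplication by its symbol, and how the symbol is recovered by applying the operator to the constant function $1$.

```python
import numpy as np

N = 8
S = np.eye(N, k=-1)                     # the shift on Taylor coefficients
c = np.array([0.5, 0.25, -0.125])       # coefficients of a polynomial symbol f

# The multiplication operator M_f is the corresponding polynomial in S.
Mf = sum(ck * np.linalg.matrix_power(S, j) for j, ck in enumerate(c))

assert np.allclose(Mf @ S, S @ Mf)      # M_f commutes with the shift
e0 = np.zeros(N)
e0[0] = 1.0                             # the constant function 1
assert np.allclose((Mf @ e0)[:len(c)], c)   # f = M_f 1 recovers the symbol
```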

**Alternative proof of Pick’s theorem:** Suppose that the Pick matrix is positive semi-definite. Define $X$ on $H = \operatorname{span}\{k_{z_1}, \ldots, k_{z_n}\}$ to be the adjoint of the map $k_{z_i} \mapsto \overline{w_i} k_{z_i}$. Positivity of the Pick matrix implies that $X$ is a contraction. Now $X$ commutes with $T = P_H S \big|_H$ (their adjoints are both diagonal with respect to the basis $\{k_{z_1}, \ldots, k_{z_n}\}$). So by the commutant lifting theorem and the above proposition there exists some $f \in H^\infty(\mathbb{D})$ such that $M_f$ commutes with $S$, $\|f\|_\infty = \|M_f\| = \|X\| \leq 1$, and $P_H M_f = X P_H$. The last condition gives $M_f^* \big|_H = X^*$, so $\overline{f(z_i)} k_{z_i} = M_f^* k_{z_i} = X^* k_{z_i} = \overline{w_i} k_{z_i}$, whence $f(z_i) = w_i$. This solves the interpolation problem.
