To ease my life when typing this stuff up, I will denote by $\mathbb{D}$ the open unit disc in $\mathbb{C}$. Our goal is to prove the following theorem.

**Theorem 1 (Pick’s interpolation theorem):** *Let $z_1, \ldots, z_n \in \mathbb{D}$ and $w_1, \ldots, w_n \in \mathbb{D}$ be given. There exists a function $f \in H^\infty(\mathbb{D})$ satisfying $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$,*

*if and only if the following matrix inequality holds:*

(*) $\left( \frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} \right)_{i,j=1}^n \geq 0$.

To make this problem into a problem in operator theory, we will need to introduce the Hardy space $H^2$ and a little bit of the theory of reproducing kernel Hilbert spaces. Fortunately, I wrote some notes on this topic a few years ago.

See this linked post for an introduction to the theory of reproducing kernel Hilbert spaces. If you don’t know this material, you will need to read that post in order to continue.

In today’s post, I will prove Pick’s theorem using the commutant lifting theorem. In a previous post, I proved it using the “lurking isometry” argument, another Hilbert space approach, which has the advantage that it also gives rise to the “realization formula” for the interpolating function (in that old post I also said a few words about the commutant lifting approach). The commutant lifting approach has the advantage that it is very beautiful, elegant, and generalizes to a plethora of situations.

In the linked post that I asked you to read, we met the Hilbert function space

$H^2 = \left\{ f(z) = \sum_{n=0}^\infty a_n z^n : \|f\|^2 := \sum_{n=0}^\infty |a_n|^2 < \infty \right\}$,

and we saw that its multiplier algebra is $H^\infty$ – the algebra of all bounded analytic functions on the unit disc with the supremum norm. We saw that $H^2$ is spanned by the collection of its kernel functions $\{k_w\}_{w \in \mathbb{D}}$, where

$k_w(z) = \frac{1}{1 - \overline{w} z}$

is the *Szegő kernel*. Note that the Szegő kernel appears in the statement of Pick’s theorem above. In fact, the “Pick matrix” appearing in (*) can be seen to be the matrix $\left( (1 - w_i \overline{w_j}) \langle k_{z_j}, k_{z_i} \rangle \right)_{i,j=1}^n$.

We haven’t yet computed a non-trivial example of an isometric dilation of an operator. Let us do this now.

The most important operator on $H^2$ is the shift

$S : H^2 \to H^2$

given by $Sf(z) = z f(z)$ (or $S(a_0, a_1, a_2, \ldots) = (0, a_0, a_1, \ldots)$). It is easy to see that $S$ is unitarily equivalent to the unilateral shift of multiplicity one.

An important fact about the Hilbert function space $H^2$ is (Proposition 3 in the linked post) that

(**) $M_f^* k_w = \overline{f(w)} \, k_w$

for every multiplier $f \in \mathrm{Mult}(H^2) = H^\infty$ and every $w \in \mathbb{D}$. In particular, if we define $\mathcal{M} = \mathrm{span}\{k_{z_1}, \ldots, k_{z_n}\}$, then $\mathcal{M}$ is invariant under the adjoint of every multiplier, and in particular it is co-invariant under $S$. If we write $A = P_{\mathcal{M}} S \big|_{\mathcal{M}}$ for the compression (so $A^* = S^* \big|_{\mathcal{M}}$), then $A^*$ is a diagonalizable operator, diagonal with respect to the **non**-orthonormal basis $k_{z_1}, \ldots, k_{z_n}$, with corresponding eigenvalues $\overline{z_1}, \ldots, \overline{z_n}$.

**Proposition 2: ***With the notation as above, let $A = P_{\mathcal{M}} S \big|_{\mathcal{M}}$. Then $A$ is a contraction, and its minimal isometric dilation is equal to $S \in B(H^2)$. *

**Remark: **You might be thinking: this is obnoxious! We took an isometry, compressed it, and now we compute its minimal dilation – which is what we started with!! Patience, my friend.

**Proof: **As we remarked above, it is clear that $S$ is an isometric co-extension of $A$. Let us show that

(#) $\bigvee_{n=0}^\infty S^n \mathcal{M} = H^2$.

Since $k_w(z) = \frac{1}{1 - \overline{w} z}$, we have $(I - \overline{w} S) k_w = 1$ for every $w \in \mathbb{D}$, and so

$1 = k_{z_1} - \overline{z_1} S k_{z_1} \in \bigvee_{n=0}^\infty S^n \mathcal{M}$.

Since the right hand side is invariant under $S$, we find that it contains every polynomial. Since the polynomials are dense in $H^2$, we have (#), and the proof is complete.

Our proof strategy (for Theorem 1) will be to consider the Hilbert space $\mathcal{M} = \mathrm{span}\{k_{z_1}, \ldots, k_{z_n}\}$ of the previous section. On this space we can define the operator $T$ determined by

$T^* k_{z_i} = \overline{w_i} \, k_{z_i}, \quad i = 1, \ldots, n$.

Since the kernel functions are linearly independent, the above equation well defines a linear operator on $\mathcal{M}$, and hence determines an operator $T \in B(\mathcal{M})$.

**Proof of Theorem 1: **We start with the easy direction (which holds in every RKHS). Assume that there exists a multiplier $f$ such that $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$; we need to prove that condition (*) holds. In the situation considered, $T$ is the co-restriction of $M_f$ to $\mathcal{M}$. Indeed, by the very definition of $T$, and the “important fact” recalled in the previous section, $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i} = \overline{w_i} k_{z_i} = T^* k_{z_i}$. Thus, $T^* = M_f^* \big|_{\mathcal{M}}$, and so $\|T^* h\| \leq \|h\|$ for all $h \in \mathcal{M}$. Writing $h = \sum_{i=1}^n c_i k_{z_i}$ for some $c_1, \ldots, c_n \in \mathbb{C}$, and expanding $\|h\|^2 - \|T^* h\|^2 \geq 0$, we obtain

$\sum_{i,j=1}^n \frac{1 - w_i \overline{w_j}}{1 - z_i \overline{z_j}} \, c_j \overline{c_i} \geq 0$,

and since this has to hold for all choices of $c_1, \ldots, c_n$, this is just the condition (*).

Now, for the converse, assume that the condition (*) holds, that is, the Pick matrix is positive semi-definite. We need to show the existence of a multiplier $f$ such that $\|f\|_\infty \leq 1$ and $f(z_i) = w_i$ for $i = 1, \ldots, n$.

Now let $T \in B(\mathcal{M})$ be the operator determined by $T^* k_{z_i} = \overline{w_i} k_{z_i}$, as above. Since $T^*$ and $A^*$ are both diagonal with respect to the same basis, they commute, and hence $T$ commutes with $A$. We know that $A$ is a contraction, and that its minimal isometric dilation is $S$. Now, the assumption (*) is precisely that $T^*$ (whence $T$) is a contraction. Indeed, this follows from the computation that we carried out two paragraphs above, in the first part of the proof.

We know, by the commutant lifting theorem (Theorem 3 in the previous post) that there exists an operator $Y \in B(H^2)$ commuting with $S$ such that $Y$ is a co-extension of $T$ and $\|Y\| = \|T\| \leq 1$.

In the next section we will prove that $\{S\}' = H^\infty$, in the sense that if $YS = SY$, then there exists $f \in H^\infty$ such that $Y = M_f$. Believing this for the moment, we have $\|f\|_\infty = \|M_f\| = \|Y\| \leq 1$. But then $f$ is the required interpolant: since $M_f^* k_{z_i} = \overline{f(z_i)} k_{z_i}$, while $M_f^* k_{z_i} = Y^* k_{z_i} = T^* k_{z_i} = \overline{w_i} k_{z_i}$, we find that $f(z_i) = w_i$ for all $i = 1, \ldots, n$.

This completes the proof of Pick’s theorem, modulo the computation of the commutant of $S$.

**Question: **Note that the second part used features of the space $H^2$, and it does not hold in an arbitrary RKHS. What property of the RKHS was used to make the argument work?

**Theorem 3: **$\{S\}' = \{M_f : f \in H^\infty\}$.

**Proof: **Clearly $\{M_f : f \in H^\infty\} \subseteq \{S\}'$ (the first algebra is commutative, and $S = M_z$). So let $Y \in \{S\}'$. We need to show that there exists a $f \in H^\infty$ such that $Y = M_f$. Since $1 \in H^2$, we define $f = Y1$. Then $f$ is an analytic function on $\mathbb{D}$. If we show that $Yh = fh$ for all $h \in H^2$, then it will follow that $f$ is a multiplier and $Y = M_f$, and the proof will be complete.

For every $h = \sum_{n=0}^\infty a_n z^n \in H^2$, its Taylor series converges in norm, thus

$Yh = Y \left( \sum_{n=0}^\infty a_n z^n \right) = \sum_{n=0}^\infty a_n Y S^n 1 = \sum_{n=0}^\infty a_n S^n Y 1 = \sum_{n=0}^\infty a_n S^n f$,

and the series converges in norm. If we apply point evaluation at $w \in \mathbb{D}$ (a bounded functional on $H^2$), we get the convergent numerical series $\sum_{n=0}^\infty a_n w^n f(w) = h(w) f(w)$. It follows that $Yh = fh$, so $f$ is a multiplier, and $Y = M_f$.
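To make the easy direction feel concrete, here is a small numerical sketch (my own illustration, not part of the original argument; it assumes `numpy`). We sample interpolation data from the Schur function $f(z) = z^2$ and check that the resulting Pick matrix is positive semi-definite:

```python
import numpy as np

def pick_matrix(z, w):
    """The Pick matrix ((1 - w_i conj(w_j)) / (1 - z_i conj(z_j)))_{i,j}."""
    z, w = np.asarray(z), np.asarray(w)
    return (1 - np.outer(w, np.conj(w))) / (1 - np.outer(z, np.conj(z)))

# Interpolation data coming from f(z) = z**2, which maps the disc into itself,
# so by the easy direction of Pick's theorem the Pick matrix must be PSD.
z = np.array([0.1 + 0.2j, -0.5j, 0.3 - 0.4j, 0.7])
w = z**2

eigs = np.linalg.eigvalsh(pick_matrix(z, w))
print(bool(np.all(eigs > -1e-9)))  # True (zero eigenvalues reflect low rank)
```

For target data that does not come from a function of supremum norm at most one, the smallest eigenvalue typically goes negative.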


We will not follow that route. Rather, we will see what dilation theory can help us to understand regarding tuples of commuting operators (which is also treated to some extent in the book). Surprisingly, this will lead to a truly nifty application in function theory.

**Definition:** Let $T_1, \ldots, T_k \in B(H)$ be commuting contractions. A commuting tuple of operators $V_1, \ldots, V_k \in B(K)$, where $K \supseteq H$, is said to be a *commuting isometric/unitary extension/dilation* of $T_1, \ldots, T_k$ if all the $V_i$s are isometric/unitary and they are all either (i) extensions of the respective $T_i$s (in the case of extension); or (ii) satisfy

$T_1^{n_1} \cdots T_k^{n_k} = P_H V_1^{n_1} \cdots V_k^{n_k} \big|_H \quad \text{for all } n_1, \ldots, n_k \geq 0$

(in the case of dilation).

**Theorem 1: ***Every tuple of commuting isometries has a unitary extension. *

**Proof: **Let $V_1, \ldots, V_k$ be commuting isometries acting on $H$. Let $U_1$ be the unitary extension of $V_1$ (that we constructed in the previous post), acting on a Hilbert space $K \supseteq H$. We did not dwell on this matter, but $U_1$ is in fact a minimal unitary dilation, in the sense that

$K = \bigvee_{n \in \mathbb{Z}} U_1^n H$.

Now we shall define operators $U_2, \ldots, U_k$ on $K$ with the following properties:

- $U_1, U_2, \ldots, U_k$ are a commuting family,
- $U_2, \ldots, U_k$ are all isometries, and,
- For $i = 2, \ldots, k$, if $V_i$ was already a unitary, then so is $U_i$.

Once we construct the above family, the proof would almost be complete: If $U_2, \ldots, U_k$ are all unitaries, then we’ll be done. If, say, $U_2$ is not a unitary, then we repeat the above construction, first constructing the minimal unitary extension of $U_2$, and then extending the remaining operators so that all together they form commuting isometries, such that whenever one of them is a unitary, its extension is also a unitary. In this way, we have a commuting family extending $V_1, \ldots, V_k$ such that at least the first two members are unitaries. Continuing this way, the proof will be complete.

It remains to define the operators $U_2, \ldots, U_k$, and to show that they have the desired properties. There is really no freedom in the definition, since we must have $U_i \big|_H = V_i$, and so (keeping in mind that we require $U_i U_1 = U_1 U_i$ on $K$) we must define

(*) $U_i \, U_1^{-n} h = U_1^{-n} V_i h, \quad h \in H, \ n \geq 0$,

keeping in mind that elements of the form $U_1^{-n} h$ span $K$. To see that this map preserves inner products (and hence well defines an isometry), we take $U_1^{-n} h$ and $U_1^{-m} g$ with, say, $n \geq m$, and check

$\langle U_1^{-n} V_i h, U_1^{-m} V_i g \rangle = \langle V_i h, V_1^{n-m} V_i g \rangle = \langle V_i h, V_i V_1^{n-m} g \rangle = \langle h, V_1^{n-m} g \rangle = \langle U_1^{-n} h, U_1^{-m} g \rangle$.

Thus, for every $i$, the definition (*) extends to a well defined isometry $U_i$ on $K$, which clearly extends $V_i$ (by considering the $n = 0$ case). Moreover, if $V_i$ is a unitary, then the range of $U_i$ is dense in $K$, so, being isometric with dense range, $U_i$ is a unitary.

Finally,

$U_i U_j \, U_1^{-n} h = U_1^{-n} V_i V_j h = U_1^{-n} V_j V_i h = U_j U_i \, U_1^{-n} h$,

and since elements of the form $U_1^{-n} h$ are total in $K$, it follows that $U_i U_j = U_j U_i$. This completes the proof.

**Exercise A: **Show that the unitary dilation constructed in the above proof is minimal.

**Exercise B: **Prove that a unitary dilation of an isometry (or isometries) is an extension.

If $T_1, \ldots, T_k$ are any contractions, then we know from Exercise C in the previous post that there are isometric dilations that simultaneously dilate any noncommutative monomial. However, if the $T_i$s are assumed to be commuting, there is no reason that the isometries that you constructed in your solution to Exercise C will also commute (recall the construction and try to understand where this fails). In fact, we will soon see that in general, when $k \geq 3$, a $k$-tuple of commuting contractions does not have an isometric (nor a unitary) dilation.

The case $k = 2$ is special.

**Theorem 2 (Ando’s isometric dilation theorem): ***Every pair of commuting contractions has an isometric dilation (in fact, an isometric co-extension). *

**Proof: **The proof is “not deep”, and just boils down to finding a sufficiently clever construction. Let $T_1, T_2$ be two commuting contractions acting on $H$. We define the isometric dilation as follows. We begin by defining $K = H \oplus H \oplus H \oplus \cdots$, and isometric dilations $V_1$ and $V_2$ by

$V_i (h_0, h_1, h_2, \ldots) = (T_i h_0, D_{T_i} h_0, 0, h_1, h_2, \ldots)$,

where $D_{T_i} = (I - T_i^* T_i)^{1/2}$. Each $V_i$ is an isometric dilation – in fact, a co-extension – of $T_i$, but they do not necessarily commute:

$V_1 V_2 (h_0, h_1, \ldots) = (T_1 T_2 h_0, D_{T_1} T_2 h_0, 0, D_{T_2} h_0, 0, h_1, \ldots)$

while

$V_2 V_1 (h_0, h_1, \ldots) = (T_2 T_1 h_0, D_{T_2} T_1 h_0, 0, D_{T_1} h_0, 0, h_1, \ldots)$.

For $V_1 V_2$ to be equal to $V_2 V_1$ we need (looking beyond the first coordinate) that $D_{T_1} T_2 = D_{T_2} T_1$ and $D_{T_2} = D_{T_1}$. There is really no reason for equality to hold here; the fact that $T_1$ and $T_2$ commute does not help at all (think of a simple example where equality fails).

However, we have the following (writing $D_i$ for $D_{T_i}$):

$\|D_1 T_2 h\|^2 + \|D_2 h\|^2 = \|T_2 h\|^2 - \|T_1 T_2 h\|^2 + \|h\|^2 - \|T_2 h\|^2 = \|h\|^2 - \|T_1 T_2 h\|^2$,

and by commutativity and symmetry, this clearly equals $\|D_2 T_1 h\|^2 + \|D_1 h\|^2$. We can therefore find a unitary $U$ on $H^{(4)} = H \oplus H \oplus H \oplus H$ such that

$U (D_1 T_2 h, 0, D_2 h, 0) = (D_2 T_1 h, 0, D_1 h, 0)$

for all $h \in H$ (this unitary is guaranteed to exist thanks to the zeros that we stuffed in the definition; think about it). Letting $W = I_H \oplus U \oplus U \oplus \cdots$ (with respect to the decomposition $K = H \oplus H^{(4)} \oplus H^{(4)} \oplus \cdots$), we now define $W_1 = W V_1$ and $W_2 = V_2 W^*$. It is now easy to check that each $W_i$ is still a co-extension of $T_i$ and also that $W_1$ and $W_2$ commute. This concludes the proof.

**Corollary (Ando’s unitary dilation theorem): ***Every pair of commuting contractions has a unitary dilation. *

**Corollary (Ando’s inequality):** *$\|p(T_1, T_2)\| \leq \sup_{|z| = |w| = 1} |p(z, w)|$ for every polynomial $p$ in two variables and every pair of commuting contractions $T_1, T_2$. *

**Proof:** For the proof, one proceeds as usual, with the difference that now one needs the spectral theorem for commuting normals, rather than the spectral theorem for single operators. Alternatively, one can use the theory of commutative C*-algebras.
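Ando’s inequality lends itself to a quick numerical sanity check. The following sketch is my own (it assumes `numpy`, and to guarantee commutativity it simply takes $T_2 = T_1^2$ – a case that, admittedly, already follows from the one-variable inequality):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
T1 = A / (2 * np.linalg.norm(A, 2))   # a contraction (norm 1/2)
T2 = T1 @ T1                          # commutes with T1 by construction
I = np.eye(6)

# p(z, w) = z^2 w - 3 z w + w^2 + 2, evaluated at (T1, T2)
pT = T1 @ T1 @ T2 - 3 * (T1 @ T2) + T2 @ T2 + 2 * I
lhs = np.linalg.norm(pT, 2)

# sup of |p| over the two-torus, approximated on a grid
t = np.exp(1j * np.linspace(0, 2 * np.pi, 200, endpoint=False))
Z, W = np.meshgrid(t, t)
rhs = np.max(np.abs(Z**2 * W - 3 * Z * W + W**2 + 2))

print(lhs <= rhs)  # True, as Ando's inequality predicts
```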

**Exercise C: **Define the notion of “minimal dilation” for isometric and unitary dilations of commuting tuples. Prove that if a tuple has a dilation, then it has a minimal dilation. Prove that the minimal isometric/unitary dilation of a pair of contractions is **not unique**.

**Theorem 3 (commutant lifting theorem): ***Let $T$ be a contraction, and let $V$ be the minimal isometric dilation of $T$ (which, we know, is a co-extension). For every $X$ commuting with $T$, there exists a $Y$ that commutes with $V$, is a co-extension of $X$, and satisfies $\|Y\| = \|X\|$. *

**Proof: **WLOG, $\|X\| = 1$. Let $(W_1, W_2)$ acting on $L$ be an isometric co-extension of the commuting pair $(T, X)$, given by Ando’s theorem. Then $W_1$ is an isometric co-extension of $T$, so by Exercise C in the previous post, $L = L_1 \oplus L_2$, where $L_1, L_2$ reduce $W_1$, $H \subseteq L_1$, and $W_1 \big|_{L_1}$ is the minimal isometric co-extension $V$ of $T$. We can therefore write, with respect to the decomposition $L = L_1 \oplus L_2$,

$W_1 = \begin{pmatrix} V & 0 \\ 0 & W_1' \end{pmatrix}$

and

$W_2 = \begin{pmatrix} Y & * \\ * & * \end{pmatrix}$.

We claim that this $Y$ does the job. It is a co-extension of $X$ because $W_2$ is, and so $W_2^* \big|_H = X^*$, whence $Y^* \big|_H = X^*$; moreover $\|X\| \leq \|Y\| \leq \|W_2\| = 1 = \|X\|$. All that remains to prove is that $Y$ commutes with $V$. One can see this simply by multiplying the above matrices and looking at the $(1,1)$ entry. Alternatively, $L_1$ being a direct summand reducing $W_1$, we have $P_{L_1} W_1 = W_1 P_{L_1}$. So

$V Y = P_{L_1} W_1 W_2 \big|_{L_1} = P_{L_1} W_2 W_1 \big|_{L_1} = Y V$.

The proof is done.

The following example is due to Kaijser and Varopoulos.

**Example: **Let be given by

Let .

Using either your brain or your favourite computer algebra software (or both), you should check the following facts:

- for all ,
- for all ,
- ,
- and finally:

and so von Neumann’s inequality fails for this triple. It follows that these operators do not have a unitary dilation (otherwise, they would satisfy a von Neumann inequality), and therefore they also cannot have an isometric dilation.

This raises the question: let $T_1, \ldots, T_k$ be a tuple of commuting contractions, and suppose that they satisfy a von Neumann type inequality:

$\|p(T_1, \ldots, T_k)\| \leq \sup_{|z_1| = \cdots = |z_k| = 1} |p(z_1, \ldots, z_k)| \quad \text{for every polynomial } p$.

Does it follow that $T_1, \ldots, T_k$ must have a unitary dilation? In other words, is the failure of a von Neumann type inequality the only obstruction to the existence of dilations? We leave this question for now. In the next lecture, we will apply the dilation theory that we have developed thus far to function theory.

- The objects and theorems here motivate (and have motivated historically) the development of the general theory, and help understand it better and appreciate it more.
- We will reach very quickly a nontrivial application of operator theory to function theory, which is quite different from what you all saw in your studies, probably.
- I am stalling, so that the students who need to fill in some prerequisites (like commutative C*-algebras and the spectral theorem) will have time to do so.
- I love this stuff!

Okay, enough explaining, let us begin.

**Definition: **An operator $T \in B(H)$ is said to be

- **selfadjoint** if $T = T^*$,
- **normal** if $T T^* = T^* T$,
- **unitary** if $T^* T = T T^* = I$,
- **isometric** (or **an isometry**) if $T^* T = I$ (equivalently, if $\|Th\| = \|h\|$ for all $h \in H$),
- **coisometric** (or **a coisometry**) if $T T^* = I$ (i.e., if $T^*$ is an isometry),
- **contractive** (or **a contraction**) if $\|T\| \leq 1$.

Normal (and hence, selfadjoint and unitary) operators are well understood. In this lecture our goal will be to understand first isometries (and hence also coisometries) and then contractions a little bit better. Before we begin with this, I will now repeat some material from the first lecture in my von Neumann algebras notes to remind ourselves of the nice structure theorem for normal operators.

The spectral theorem is the basic structure theorem for normal operators. It tells us what a general normal operator looks like. Recall that if $N$ is a normal operator acting on a finite dimensional space $H$, then $N$ is unitarily equivalent to a diagonal operator, that is, there exists a unitary operator $U$ such that

$U N U^* = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$,

where $\lambda_1, \ldots, \lambda_n \in \sigma(N)$ (some points in $\sigma(N)$ are possibly repeated).

Moreover, if $N$ is a compact normal operator on a Hilbert space, then it is unitarily equivalent to a diagonal operator (an infinite diagonal matrix, acting by multiplication on $\ell^2$), the diagonal of which corresponds to the eigenvalues of $N$, which form a sequence converging to $0$:

$N \cong \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3, \ldots), \quad \lambda_n \to 0$.

If $N$ is unitarily equivalent to a diagonal operator where the diagonal elements form a bounded sequence of numbers (not necessarily converging to $0$), then $N$ is a bounded normal operator (which is not necessarily compact). However, a general bounded normal operator need not be unitarily equivalent to a diagonal operator.

**Example: **The operator $M \in B(L^2[0,1])$ given by $(Mf)(x) = x f(x)$ is a selfadjoint bounded operator, and it is an easy exercise to see that this operator has no eigenvalues (so it cannot be unitarily equivalent to a diagonal operator). However, the operator in this example is rather well understood, and it is “sort of” diagonal. The general case is not significantly more complicated than this.

To understand general normal operators, one needs to recall the notions of measure space and of $L^p$ spaces. Let $(X, \mu)$ be a measure space and consider the Hilbert space $L^2(\mu)$. Every $f \in L^\infty(\mu)$ defines a (normal) bounded operator $M_f : g \mapsto fg$ on $L^2(\mu)$.

**Exercise A: **In case you never have, prove the following facts (or look them up; Kadison-Ringrose have a nice treatment relevant to our setting). Let $(X, \mu)$ be a $\sigma$-finite measure space and $f \in L^\infty(\mu)$.

- $\|M_f\| = \|f\|_\infty$ (where $\|f\|_\infty$ is the **essential supremum** of $|f|$, which is defined to be $\inf\{ c : |f| \leq c \ \mu\text{-a.e.}\}$).
- $M_f^* = M_{\overline{f}}$.
- If $g$ is a measurable function and $M_g$ defines a bounded operator on $L^2(\mu)$, then $g$ is essentially bounded: $\|g\|_\infty \leq \|M_g\|$.
- If $f, g \in L^\infty(\mu)$, then $M_f M_g = M_{fg}$ and $M_f + M_g = M_{f+g}$.
- $M_f$ is selfadjoint if and only if $f$ is real valued almost everywhere.

The algebra $L^\infty(\mu)$ is an abstract C*-algebra with the usual algebraic operations, the $*$-operation $f^* = \overline{f}$, and norm $\|f\|_\infty$. The map

$f \mapsto M_f$

is a $*$-representation (i.e., an algebraic homomorphism that preserves the adjoint: $M_f^* = M_{\overline{f}}$), which is isometric ($\|M_f\| = \|f\|_\infty$), so omitting the $M$ we can think of $L^\infty(\mu)$ as a C*-subalgebra of $B(L^2(\mu))$. Since $M_f M_f^* = M_{|f|^2} = M_f^* M_f$, the operator $M_f$ is always normal. It is selfadjoint if and only if $f$ is a.e. real valued, and it is unitary if and only if $|f| = 1$ a.e., which happens if and only if it is isometric. The operator $M_f$, where $f \in L^\infty(\mu)$, is called a *multiplication operator*. Multiplication operators form a rich collection of examples of normal operators. The spectral theorem says that this collection exhausts all normal operators: every normal operator is unitarily equivalent to a multiplication operator.
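In finite dimensions this picture is completely transparent: take $X$ finite with counting measure, so that $L^2(\mu) = \mathbb{C}^n$ and $M_f$ is a diagonal matrix. Here is a small sketch of mine (assuming `numpy`; the vectors $f, g$ are arbitrary samples) verifying a few of the facts from Exercise A:

```python
import numpy as np

# Discrete picture: X = {0,...,3} with counting measure, L^2(mu) = C^4,
# and M_f = diag(f) for f in L^infty(mu) = C^4.
f = np.array([0.5 - 0.5j, -2.0 + 0.0j, 1.0 + 1.0j, 0.25 + 0.0j])
g = np.array([1.0 + 0.0j, 0.0 + 1.0j, -1.0 + 0.0j, 2.0 + 0.0j])
Mf, Mg = np.diag(f), np.diag(g)

# ||M_f|| is the (essential) supremum of |f|
assert np.isclose(np.linalg.norm(Mf, 2), np.max(np.abs(f)))
# M_f^* = M_{conj(f)} and M_f M_g = M_{fg}
assert np.allclose(Mf.conj().T, np.diag(np.conj(f)))
assert np.allclose(Mf @ Mg, np.diag(f * g))
# M_f is normal
assert np.allclose(Mf @ Mf.conj().T, Mf.conj().T @ Mf)
print("ok")
```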

**Theorem 1 (the spectral theorem):** *Let $N$ be a normal operator on a Hilbert space $H$. Then $N$ is unitarily equivalent to a multiplication operator, that is, there exists a measure space $(X, \mu)$, a unitary operator $U : H \to L^2(\mu)$, and a complex valued $f \in L^\infty(\mu)$, such that*

*$U N U^* = M_f$. *

*When $H$ is separable, $X$ can be taken to be a locally compact Hausdorff space, and $\mu$ a regular Borel probability measure. *

The spectral theorem allows us, in principle, to “solve all problems about normal operators”. Okay, that’s maybe an exaggeration; the spectral theorem reduces any problem about normal operators to a problem about multiplication operators. As an example, we prove

**Proposition 2 (von Neumann’s inequality for normal operators): ***Let $N$ be a normal contraction. Then for any polynomial $p$, *

$\|p(N)\| \leq \sup_{|z| \leq 1} |p(z)|$.

**Proof:** Since unitary equivalence preserves everything, we may assume that $N = M_f \in B(L^2(\mu))$. Now, $M_f$ is a contraction so $|f| \leq 1$ a.e. So we find

$\|p(N)\| = \|M_{p \circ f}\| = \|p \circ f\|_\infty$,

and since $|f| \leq 1$ a.e., we have $|p \circ f| \leq \sup_{|z| \leq 1} |p(z)|$ a.e., so

$\|p(N)\| \leq \sup_{|z| \leq 1} |p(z)| = \sup_{|z| = 1} |p(z)|$,

as required. (The second equality simply follows from the maximum modulus principle, which, by the way, can be proved using the methods we will use in this lecture.)
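Here is the same argument in miniature (a sketch of mine, assuming `numpy`): a diagonal matrix is a multiplication operator over a four-point measure space, and von Neumann’s inequality for it is just the statement that the values $p(\lambda_i)$ are dominated by the supremum of $|p|$ over the closed disc.

```python
import numpy as np

# A diagonal (hence normal) contraction: eigenvalues in the closed unit disc.
lam = np.array([0.9, -0.5 + 0.5j, 0.3j, -1.0])
N = np.diag(lam)

coeffs = [2.0, 0.0, -1.0, 3.0]          # p(z) = 2 z^3 - z + 3

# p(N) is again diagonal with diagonal p(lam), so ||p(N)|| = max |p(lam_i)|
pN = np.diag(np.polyval(coeffs, lam))
lhs = np.linalg.norm(pN, 2)

# sup over the closed disc equals sup over the circle (maximum modulus)
z = np.exp(1j * np.linspace(0, 2 * np.pi, 2000))
rhs = np.max(np.abs(np.polyval(coeffs, z)))

print(lhs <= rhs)  # True
```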

Unitary operators are isometric – that’s one basic example. An isometry that is not a unitary is called a **proper isometry**.

**Example (the unilateral shift): **Let $G$ be a Hilbert space, suppose $\dim G = \alpha$. Let $\ell^2(G)$ be the Hilbert space consisting of square summable sequences with values in $G$:

$\ell^2(G) = \left\{ (g_0, g_1, g_2, \ldots) : g_n \in G, \ \sum_{n=0}^\infty \|g_n\|^2 < \infty \right\}$.

$\ell^2(G)$ has an inner product

$\langle (g_n), (h_n) \rangle = \sum_{n=0}^\infty \langle g_n, h_n \rangle$.

On $\ell^2(G)$ we define **the unilateral shift** (of multiplicity $\alpha$) to be given by

$S (g_0, g_1, g_2, \ldots) = (0, g_0, g_1, \ldots)$.

A simple computation reveals $S^* (g_0, g_1, g_2, \ldots) = (g_1, g_2, g_3, \ldots)$ and so

$S^* S = I \neq S S^*$.
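The scalar case ($G = \mathbb{C}$) can be played with directly. The following sketch of mine (assuming `numpy`) models finitely supported sequences and exhibits the computation just made: $S$ is isometric, $S^* S = I$, but $S S^*$ only projects away the zeroth coordinate.

```python
import numpy as np

def S(x):       # the unilateral shift: (x0, x1, ...) -> (0, x0, x1, ...)
    return np.concatenate(([0.0], x))

def S_adj(x):   # its adjoint: (x0, x1, x2, ...) -> (x1, x2, ...)
    return x[1:]

x = np.array([3.0, -1.0, 2.0])

# S preserves norms (it is an isometry)...
assert np.isclose(np.linalg.norm(S(x)), np.linalg.norm(x))
# ...and S*S = I, while SS* kills the 0th coordinate:
assert np.allclose(S_adj(S(x)), x)
assert np.allclose(S(S_adj(x)), [0.0, -1.0, 2.0])
print("ok")
```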

**Definition:** Let $V \in B(H)$ be an isometry. A subspace $W \subseteq H$ is called a **wandering subspace** for $V$ if

$V^n W \perp V^m W \quad \text{for all } n \neq m$.

(Equivalently, $V^n W \perp W$ for all $n \geq 1$.) We define

(*) $L = \bigoplus_{n=0}^\infty V^n W$.

We have that $V L = \bigoplus_{n=1}^\infty V^n W$, and this means that $W = L \ominus V L$.

**Example: **Let be a subspace. Fix , and let be given by

and for all .

Then is a wandering subspace for . In particular,

is a wandering subspace.

**Definition: **An isometry $V \in B(H)$ is said to be **a unilateral shift** (of multiplicity $\alpha$) if it has a wandering subspace $W$ (with $\dim W = \alpha$) for $V$ such that $H = \bigoplus_{n=0}^\infty V^n W$.

Note that $W$ is determined uniquely by (*) as

(**) $W = H \ominus V H$.

Alright: we have defined THE unilateral shift and the notion of A unilateral shift, what gives?

**Exercise A: **Let $V \in B(H)$ be a unilateral shift. Show that

- $V$ is unitarily equivalent to THE unilateral shift on $\ell^2(W)$, where $W$ is given by (**).
- A unilateral shift is unitarily equivalent to $V$ if and only if it has the same multiplicity.
- $\bigcap_{n=0}^\infty V^n H = \{0\}$ (in other words, $\lim_n \|V^{*n} h\| = 0$ for every $h \in H$).

**Definition: **Let $T \in B(H)$. A subspace $M \subseteq H$ is said to be

- **invariant** if $T M \subseteq M$,
- **coinvariant** if $T^* M \subseteq M$ (equivalently, if $M^\perp$ is invariant for $T$),
- **reducing** if it is invariant for both $T$ and $T^*$ (equivalently, if $M$ and $M^\perp$ are both invariant for $T$).

Sometimes, to clarify the operator with respect to which the subspace should satisfy the properties, the terminology **invariant for** $T$ (etc.) is used.

**Theorem 3 (Wold decomposition):** *Let $V \in B(H)$ be an isometry. Then there exists a decomposition $H = H_u \oplus H_s$ of $H$ into two reducing subspaces such that $V \big|_{H_u}$ is unitary on $H_u$ and $V \big|_{H_s}$ is a unilateral shift. Moreover, this decomposition is determined uniquely by $H_u = \bigcap_{n=0}^\infty V^n H$ and $H_s = \bigoplus_{n=0}^\infty V^n W$, where $W = H \ominus V H$. *

**Proof: **If we define $W = H \ominus V H$, then $W$ is wandering for $V$ because for all $n \geq 1$ we have by definition $V^n W \subseteq V H \perp W$.

Let us define $H_s = \bigoplus_{n=0}^\infty V^n W$, and $H_u = H \ominus H_s$. Then $H_s$ is clearly invariant, and $V \big|_{H_s}$ is a unilateral shift.

We will soon show that $H_u = \bigcap_{n=0}^\infty V^n H$. Assuming that for the moment, we have that $V H_u = H_u$, so $H_u$ is invariant, and thus $H_u$ and $H_s$ are reducing. Moreover, $V \big|_{H_u}$ is a surjective isometry, so it is a unitary (showing that a surjective isometry is a unitary is a basic exercise).

To show that $H_u = \bigcap_{n=0}^\infty V^n H$ we take $h \in H$ and note that $h \in H_u$ if and only if $h \perp V^n W$ for every $n \geq 0$. But

$W \oplus V W \oplus \cdots \oplus V^n W = (H \ominus V H) \oplus (V H \ominus V^2 H) \oplus \cdots \oplus (V^n H \ominus V^{n+1} H) = H \ominus V^{n+1} H$,

where we cancelled out all the terms in the above “telescopic sum”. Now, $h \perp V^n W$ for all $n$ is the same as $h \in V^{n+1} H$ for all $n$, and we have established that $H_u = \bigcap_{n=0}^\infty V^n H$.

We leave the uniqueness as an exercise.

**Definition: **Let $T \in B(H)$, and suppose that $K$ is a Hilbert space containing $H$. An operator $V \in B(K)$ is said to be an **extension** of $T$ if $V \big|_H = T$.

**Theorem 4 (unitary extension of an isometry):** *Every isometry has a unitary extension. *

**Proof: **Let $V$ be an isometry and let $H = H_u \oplus H_s$ be its Wold decomposition. By Exercise A, up to unitary equivalence, $V = V \big|_{H_u} \oplus S$, where $S$ is the unilateral shift of some multiplicity, acting on $\ell^2(G)$. Letting $U$ be the bilateral shift on $\ell^2_{\mathbb{Z}}(G)$, that is, $U$ is defined by

$U (g_n)_{n \in \mathbb{Z}} = (g_{n-1})_{n \in \mathbb{Z}}$.

Clearly, $U$ is an extension of $S$ (identifying $\ell^2(G)$ with the sequences supported on the nonnegative coordinates), and so $V \big|_{H_u} \oplus U$ is an extension of $V$.

**Corollary: ***Von Neumann’s inequality holds for isometries. *

**Proof:** Let $V$ be an isometry and let $U$ be a unitary extension. From $p(V) = p(U) \big|_H$ follows $\|p(V)\| \leq \|p(U)\|$, and so $\|p(V)\| \leq \sup_{|z| \leq 1} |p(z)|$ by Proposition 2.

In the previous section we showed that every isometry has a unitary extension, and this immediately led to the application that isometries satisfy von Neumann’s inequality. Of course, we cannot hope to show that any operator other than an isometry has a unitary extension, so we need another trick.

**Definition:** Let $T \in B(H)$, and suppose that $K$ is a Hilbert space containing $H$. An operator $V \in B(K)$ is said to be a **dilation** of $T$ if $T^n = P_H V^n \big|_H$ for all $n = 0, 1, 2, \ldots$.

Sometimes, the term *dilation* refers to the relation $T = P_H V \big|_H$ alone, and then the above notion is called a *power dilation*.

**Examples: **If $V$ is an extension of $T$, then $V$ is a dilation of $T$. Note that in this case we have the following picture with respect to $K = H \oplus (K \ominus H)$:

$V = \begin{pmatrix} T & * \\ 0 & * \end{pmatrix}$.

Likewise, if $V$ is a **co-extension** of $T$ (meaning that $V^*$ is an extension of $T^*$) then it is a dilation, and in this case we have the picture:

$V = \begin{pmatrix} T & 0 \\ * & * \end{pmatrix}$,

or (simply changing basis)

$V = \begin{pmatrix} * & * \\ 0 & T \end{pmatrix}$.

Both of the above situations are a special case of the following picture

(#) $V = \begin{pmatrix} * & * & * \\ 0 & T & * \\ 0 & 0 & * \end{pmatrix}$

in which $V$ is easily seen to be a dilation of $T$:

$V^n = \begin{pmatrix} * & * & * \\ 0 & T^n & * \\ 0 & 0 & * \end{pmatrix}$.

It turns out that (#) is the most general form of a dilation.

**Proposition 5 (Sarason’s Lemma):** *Let $H \subseteq K$ be Hilbert spaces, and suppose that $V \in B(K)$ is a (power) dilation of $T \in B(H)$. Then there exist two subspaces $N \subseteq M \subseteq K$ invariant for $V$ such that $H = M \ominus N$. *

**Remark: **The case where $N = \{0\}$ corresponds to the case where $H$ is invariant and $V$ is an extension of $T$; the case where $M = K$ corresponds to the case where $H$ is co-invariant and $V$ is a co-extension of $T$.

**Remark:** In the situation of the above proposition, we say that $H$ is a *semi-invariant subspace* for $V$. More generally, if $\mathcal{A} \subseteq B(K)$ is an operator algebra, then a subspace $H \subseteq K$ is said to be *semi-invariant* for $\mathcal{A}$ if there exist two $\mathcal{A}$-invariant subspaces $N \subseteq M$ such that $H = M \ominus N$.

**Proof of Sarason’s Lemma: (maybe I should leave this as an exercise)**

Let $M = \bigvee \{V^n h : n \geq 0, h \in H\}$ (this notation means the closed linear subspace spanned by $V^n h$ where $n \geq 0$ and $h \in H$). Then $M$ is clearly invariant, and $H \subseteq M$. For $H = M \ominus N$ to work out we really have no choice but to define $N = M \ominus H$. The only thing that remains to do is to prove that $N$ is invariant for $V$.

If $h \in H$, then the dilation property means that $P_H V^n h = T^n h$, and so for every $g \in H$ we have that $\langle V^{n+1} h, g \rangle = \langle T^{n+1} h, g \rangle = \langle T^n h, T^* g \rangle = \langle V^n h, T^* g \rangle$. By taking sums and limits, we find that $\langle V m, g \rangle = \langle m, T^* g \rangle$ for all $m \in M$ and $g \in H$.

To show that $N$ is invariant, let $x \in N$, $g \in H$, and consider $V x$. Since $V x \in M$, to show that it is in $N$ we need to show that it is orthogonal to $H$; equivalently, we need to show that $\langle V x, g \rangle = 0$. But by the previous paragraph $\langle V x, g \rangle = \langle x, T^* g \rangle = 0$, because $x \perp H \ni T^* g$. This completes the proof.

**Theorem 6 (Sz.-Nagy’s isometric dilation theorem):** *Let $T$ be a contraction on a Hilbert space $H$. Then there exists a Hilbert space $K \supseteq H$ and an isometry $V \in B(K)$ such that *

- *$V$ is a co-extension of $T$.*
- *$K$ is the smallest subspace of $K$ invariant for $V$ that contains $H$ (that is, $K = \bigvee_{n \geq 0} V^n H$).*

*Moreover, the pair $(K, V)$ is unique, in the sense that if $(K', V')$ is another such pair satisfying the above requirements, then there exists a unitary $U : K \to K'$ such that $U h = h$ for all $h \in H$, and such that $U V = V' U$. *

**Remark:** The isometry $V$ (or more precisely, the pair $(K, V)$) is known as *the minimal isometric dilation of* $T$. The theorem can be reformulated (and sometimes is) by saying that every contraction has a minimal coisometric extension.

**Proof: **Define the *defect operator* $D_T = (I - T^* T)^{1/2}$. Note that $D_T^2 = I - T^* T$ is equivalent to

$\|T h\|^2 + \|D_T h\|^2 = \|h\|^2, \quad h \in H$.

On $K = H \oplus \overline{D_T H} \oplus \overline{D_T H} \oplus \cdots$ we define an operator $V$ by

$V (h_0, h_1, h_2, \ldots) = (T h_0, D_T h_0, h_1, h_2, \ldots)$

or, in matrix form

$V = \begin{pmatrix} T & & & \\ D_T & 0 & & \\ & I & 0 & \\ & & I & 0 \\ & & & \ddots & \ddots \end{pmatrix}$

(all the empty slots are understood to be zero).

We leave it as an exercise for the reader to complete the proof.
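The heart of the construction is the defect identity $\|T h\|^2 + \|D_T h\|^2 = \|h\|^2$, which is exactly what makes the first column of the matrix above isometric. A quick numerical check of this identity (my own sketch, assuming `numpy`; the square root is computed by diagonalizing $I - T^* T$):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random contraction T and its defect operator D_T = (I - T*T)^{1/2}
A = rng.standard_normal((5, 5))
T = A / np.linalg.norm(A, 2)                 # ||T|| = 1
G = np.eye(5) - T.conj().T @ T               # I - T*T  (positive semidefinite)
w, V = np.linalg.eigh(G)
D = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# ||Th||^2 + ||D_T h||^2 = ||h||^2, so (T, D_T, 0, 0, ...)^t is isometric
h = rng.standard_normal(5)
assert np.isclose(np.linalg.norm(T @ h)**2 + np.linalg.norm(D @ h)**2,
                  np.linalg.norm(h)**2)
print("ok")
```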

**Exercise B:** Prove that $V$ is an isometry, a co-extension of $T$, and that it satisfies the minimality requirement. While you are at it, please also show that the pair $(K, V)$ is unique.

The uniqueness of the minimal dilation can be strengthened as follows.

**Exercise C: **Prove that if $V \in B(K)$ is an isometric co-extension of $T$, then there exist two reducing subspaces $K_1, K_2 \subseteq K$, such that $K = K_1 \oplus K_2$ and $H \subseteq K_1$, and such that $V \big|_{K_1}$ is unitarily equivalent to the minimal isometric dilation of $T$. Thus, $V$ breaks up as $V = V \big|_{K_1} \oplus V \big|_{K_2}$, where $V \big|_{K_2}$ is essentially irrelevant.

Note that if we didn’t insist on creating a minimal isometric dilation, we could have taken

$K = H \oplus H \oplus H \oplus \cdots$

and $V$ to be given by the same formula:

$V (h_0, h_1, h_2, \ldots) = (T h_0, D_T h_0, h_1, h_2, \ldots)$.

**Exercise D: **If $T_1, \ldots, T_k \in B(H)$ are contractions, then there exists a Hilbert space $K \supseteq H$ and isometries $V_1, \ldots, V_k$ on $K$ such that

$T_{i_1} T_{i_2} \cdots T_{i_m} = P_H V_{i_1} V_{i_2} \cdots V_{i_m} \big|_H$

for all $m \in \mathbb{N}$ and all $i_1, \ldots, i_m \in \{1, \ldots, k\}$.

**Theorem 7 (Sz.-Nagy’s unitary dilation theorem):** *Let $T$ be a contraction on a Hilbert space $H$. Then there exists a Hilbert space $K \supseteq H$ and a unitary $U \in B(K)$ such that *

- *$U$ is a dilation of $T$.*
- *$K$ is the smallest subspace of $K$ reducing for $U$ that contains $H$ (that is, $K = \bigvee_{n \in \mathbb{Z}} U^n H$).*

*Moreover, the pair $(K, U)$ is unique, in the sense that if $(K', U')$ is another such pair satisfying the above requirements, then there exists a unitary $Z : K \to K'$ such that $Z h = h$ for all $h \in H$, and such that $Z U = U' Z$. *

**Remark: **The unitary $U$ (or more precisely, the pair $(K, U)$) is called *the minimal unitary dilation of* $T$.

**Proof: **Let $V$ be the minimal isometric dilation of $T$, and let $U$ be the unitary extension of $V$ constructed in Theorem 4. Then $U$ is a dilation of $T$ (“my dilation’s dilation is a dilation of mine”). If $U$ is not minimal by some chance, then restrict $U$ to $\bigvee_{n \in \mathbb{Z}} U^n H$ (the notation should be self-explanatory), and then the restriction is a minimal unitary dilation. Uniqueness is left as an exercise.

**Corollary (von Neumann’s inequality): ***Let $T$ be a contraction on a Hilbert space. For every polynomial $p$, *

$\|p(T)\| \leq \sup_{|z| \leq 1} |p(z)|$.

**Proof: **

$\|p(T)\| = \| P_H \, p(U) \big|_H \| \leq \|p(U)\| \leq \sup_{|z| \leq 1} |p(z)|$,

where $U$ is a unitary dilation of $T$, and the last inequality is Proposition 2.
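Having gone through all this work, it is satisfying to watch the inequality hold numerically. The following sketch is my own (assuming `numpy`; the contraction and the polynomial are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((7, 7)) + 1j * rng.standard_normal((7, 7))
T = A / (1.1 * np.linalg.norm(A, 2))    # a contraction, ||T|| < 1
I = np.eye(7)

# p(z) = z^4 - 2 z^3 + i z + 3, evaluated at T and on the unit circle
pT = T @ T @ T @ T - 2 * (T @ T @ T) + 1j * T + 3 * I
lhs = np.linalg.norm(pT, 2)

theta = np.linspace(0, 2 * np.pi, 4000)
z = np.exp(1j * theta)
rhs = np.max(np.abs(z**4 - 2 * z**3 + 1j * z + 3))

print(lhs <= rhs)  # True, as von Neumann's inequality predicts
```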


$H$ will always denote a Hilbert space over $\mathbb{C}$. $B(H)$ will always denote the algebra of bounded operators on $H$. I am interested in operators on Hilbert space; various subspaces and algebras of operators that come with various structures, as well as the relationship between these subspaces and structures; and connections and applications of the above to other areas, in particular complex function theory and matrix theory.

I expect students to know the spectral theorem for normal operators on Hilbert space (see here; a proof in the selfadjoint case that assumes very little from the reader can be found in my notes, see Sections 3 and 4). I also will assume some familiarity with Banach algebras and commutative C*-algebras – the student should contact me for references.

We begin by surveying different kinds of structures of interest.

$B(H)$ is a Banach space when equipped with the norm

$\|T\| = \sup \{ \|T h\| : h \in H, \ \|h\| \leq 1 \}$.

**Definition 1:** An **operator space** is a linear subspace $M \subseteq B(H)$.

We always consider an operator space as a normed space with the norm induced from .

**Remarks: **

- We usually assume without mention that our operator spaces are closed. For most parts of the theory it doesn’t make a difference.
- What we defined above is sometimes called a “concrete operator space”. We will encounter “abstract operator spaces” later on.

We can already say something quite interesting about the collection of operator spaces: *every normed (Banach) space is isometric to a (closed) operator space (so long as we allow $H$ to be big enough). *

(Quick proof: $X \hookrightarrow C\big((X^*)_1\big) \subseteq B(H)$ for an appropriate $H$, where $(X^*)_1$ is the closed unit ball of $X^*$ with the weak-* topology).

By * isometric *we really mean

For example are easily seen to be isometric to and , respectively.

**Exercise A: **What can you say about embedding in for finite dimensional?

The above fact is in extreme contrast with the situation for Hilbert spaces: every closed subspace of a Hilbert space is a Hilbert space.

$B(H)$ is also an algebra, with product

$(S, T) \mapsto S T$.

In fact, $\|S T\| \leq \|S\| \|T\|$, so $B(H)$ is a Banach algebra.

**Note:** I expect students in this course to be familiar with Banach algebras. Those who are not should ask me for references (and I will provide time for catching up).

**Definition 2:** An *operator algebra* is a subalgebra $A \subseteq B(H)$.

By subalgebra we just mean a subspace closed under multiplication: $A \cdot A \subseteq A$. We always take operator algebras with the induced algebraic structure and norm.

**Remark: **The two remarks made after Definition 1 hold for operator algebras as well.

**Examples:** , the upper triangular matrices, the operator algebra generated by an operator .

Interesting fact: *not every Banach algebra is an operator algebra*. By this we mean that not every Banach algebra is isometrically isomorphic to an operator algebra. In the context of Banach algebras, an *isometric isomorphism* is understood to be an isometric isomorphism in the sense of normed spaces, which also preserves the product.

How can we see this? We shall prove later in this course *von Neumann’s inequality*, which says that

$\|p(T)\| \leq \sup_{|z| \leq 1} |p(z)|$

for every polynomial $p$ and every contraction $T$ on a Hilbert space (*contraction* means $\|T\| \leq 1$). Conversely, if $X$ is a Banach space such that von Neumann’s inequality holds for every contraction in $B(X)$, then $X$ must be a Hilbert space (a result of Foias). So if $X$ is not a Hilbert space, there exists a contraction $T \in B(X)$ such that the algebra generated by $T$ is not an operator algebra.

**Exercise B:** Find explicitly such an operator for which vN inequality fails.

So all Banach spaces are operator spaces, while not all Banach algebras are operator algebras. Structure matters. The more structure, the more “rigidity”.

**Definition 3: **An **involution** on an algebra $A$ is a map $a \mapsto a^*$ such that

$(a + \lambda b)^* = a^* + \overline{\lambda} b^*, \quad (a b)^* = b^* a^*, \quad (a^*)^* = a$

for all $a, b \in A$ and $\lambda \in \mathbb{C}$. An algebra with involution is said to be a ***-algebra**.

**Definition 4:** A **Banach *-algebra** is a *-algebra which is also a Banach algebra such that $\|a^*\| = \|a\|$ for all $a \in A$.

**Definition 5: **A **C*-algebra** is a Banach *-algebra such that

$\|a^* a\| = \|a\|^2$

holds for all $a \in A$; this identity is called *the C* identity*.

**Examples:** $B(H)$ has a natural involution, the adjoint $T \mapsto T^*$. So $B(H)$ is a *-algebra, and so is every “*-subalgebra”. The norm and adjoint in $B(H)$ satisfy the C* identity **(prove this if you never did!**), so $B(H)$ is a C*-algebra, as are all its closed *-subalgebras.
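If you have never verified the C* identity in $B(H)$, here is at least numerical evidence, in a sketch of mine (assuming `numpy`): both sides are computed via largest singular values.

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

op_norm = np.linalg.norm(T, 2)               # operator norm = largest singular value
cstar = np.linalg.norm(T.conj().T @ T, 2)    # ||T* T||

assert np.isclose(cstar, op_norm**2)         # the C* identity ||T* T|| = ||T||^2
print("ok")
```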

**Definition 6: **A **concrete C*-algebra** is a closed *-subalgebra of $B(H)$.

Clearly concrete C*-algebras are C*-algebras, with norm and algebraic structure inherited from $B(H)$. Note that we do require C*-algebras to be closed. The maps in the setting of C*-algebras are *-homomorphisms and *-isomorphisms.

**Theorem (Gelfand-Naimark): ***Every C*-algebra is (isometrically) *-isomorphic to a concrete C*-algebra. *

The word “isometrically” is in parenthesis, because it will turn out that every *-isomorphism is isometric. This is part of the beautiful rigidity properties that C*-algebras enjoy: the algebraic structure captures the metric structure.

As hinted above, there exist notions of “abstract operator space/algebras”, and there are theorems that these are isomorphic in the appropriate sense to concrete operator spaces/algebras. What could the abstract definition be based on?

**Question**: If abstract objects end up being concrete, why not just work in the concrete setting?

**Answer:** The abstract setting is more flexible, and allows for more constructions (but sometimes it is indeed more convenient to work in the concrete setting). For example, quotient C*-algebras: easy to establish the axioms, not trivial to represent.

Recall that $a \in B(H)$ is said to be **positive** (or $a \geq 0$) if

$\langle a h, h \rangle \geq 0$

for all $h \in H$. This is equivalent to the two following conditions holding (together)

- $a = a^*$ ($a$ is *selfadjoint*), and
- $\sigma(a) \subseteq [0, \infty)$.

(Note that the above two conditions make sense in any Banach *-algebra, and so we have selfadjoint and positive elements in such algebras as well. Also the following definition of order makes sense there.)

We then say that $a \leq b$ if $b - a \geq 0$. This gives a partial order on the selfadjoint elements.

**Definition 7: **An *operator system* is an operator space $S \subseteq B(H)$ such that

- $I \in S$, and
- $S^* = S$ (that is, $a \in S$ implies $a^* \in S$).

**Examples:** Concrete C*-algebras are operator systems, as are spaces of the form $\mathbb{C} I + M + M^*$ where $M$ is some operator space.

**Non-example:** The selfadjoint elements are not an operator system: they form a real subspace but not a complex subspace.

However, operator systems have “sufficiently many” selfadjoint and positive elements.

**Facts:**

- $a = \frac{a + a^*}{2} + i \, \frac{a - a^*}{2i}$, so an operator system is spanned by its selfadjoints.
- If $a$ is selfadjoint then $a = \frac{1}{2}(\|a\| I + a) - \frac{1}{2}(\|a\| I - a)$ is a difference of two positive elements, so an operator system is actually spanned by its positive elements.

**Notation:** $S_+ = \{ a \in S : a \geq 0 \}$.

A linear map $\phi : S \to T$ between operator systems is **positive** if $\phi(S_+) \subseteq T_+$.

For $n \in \mathbb{N}$, we write $H^{(n)} = H \oplus \cdots \oplus H$ (direct sum $n$ times), with inner product

$\langle (h_i), (g_i) \rangle = \sum_{i=1}^n \langle h_i, g_i \rangle$

for $(h_i), (g_i) \in H^{(n)}$.

We have an identification $M_n(B(H)) \cong B(H^{(n)})$. Thus $M_n(B(H))$ has natural linear and algebraic structure, norm, * operation and order.

**Key (and simple) fact:** If $S$ is a concrete ZZZ in $B(H)$, then $M_n(S)$ is a concrete ZZZ in $B(H^{(n)})$, where

ZZZ $\in$ {op. space, op. alg., op. system, C*-subalgebra}.

Let $S, T$ be operator spaces, and $\phi : S \to T$ a linear map. We define $\phi_n : M_n(S) \to M_n(T)$ by

$\phi_n \big( (a_{ij})_{i,j} \big) = \big( \phi(a_{ij}) \big)_{i,j}$.

**Definition 8:**

- If $\sup_n \|\phi_n\| < \infty$ then $\phi$ is said to be **completely bounded** (and $\|\phi\|_{cb} := \sup_n \|\phi_n\|$ is referred to as the **CB norm**).
- If $\|\phi\|_{cb} \leq 1$ then $\phi$ is **completely contractive**.
- If $\phi_n$ is isometric for all $n$ then $\phi$ is **completely isometric**.
- If $\phi_n$ is positive for all $n$ then $\phi$ is *completely positive*.

We will see that there is a big difference between being bounded/positive and being completely bounded/completely positive.
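A standard witness of this difference is the transpose map on the $2 \times 2$ matrices: it is positive, but its amplification to $2 \times 2$ blocks is not. A numpy sketch (the matrices here are the usual matrix units, chosen by me for illustration):

```python
import numpy as np

# Matrix units E_ij in M_2.
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]

# X = [E_ij]_{ij} in M_2(M_2) is a positive (rank-one, scaled) projection.
X = sum(np.kron(E(i, j), E(i, j)) for i in range(2) for j in range(2))
assert np.linalg.eigvalsh(X).min() >= -1e-12   # X is positive

# Apply the transpose map to each 2x2 block: the result is the swap
# operator, which has eigenvalue -1, hence is NOT positive.
phi2X = sum(np.kron(E(i, j), E(i, j).T) for i in range(2) for j in range(2))
assert np.linalg.eigvalsh(phi2X).min() < -0.5
```

So the transpose is positive but not completely positive, already at the second level.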


The official name of the course is “Topics in Operator Theory” but the true title is “Operator Spaces, Operator Algebras and Related Topics”. There are two somewhat competing goals driving this course: the first goal is to give students a taste of the beautiful subjects of operator spaces and operator algebras, broadening their view of functional analysis, and giving those who wish enough tools to delve into the literature in this subject. The second goal is to train students to understand the problems in which I am interested and to get acquainted with the methods of the theory so that they will be able to carry out research in my group. The choice of topics will therefore be somewhat eclectic. In fact, I have several different plans for this course, and I am keeping things vague on purpose so that I am free to change course as the wind blows (and as I see who the students are, what their background is and where their interests lie).

What else? The course will be given in English. There is no official web page for the course – I might post exercises on this blog. The grade will be based on some exercises that I will give throughout the semester, and a final “big homework” project.


Recall that the **matrix range** of a $d$-tuple $A = (A_1, \ldots, A_d)$ of operators on a Hilbert space $H$ is $\mathcal{W}(A) = \bigcup_n \mathcal{W}_n(A)$, where

$\mathcal{W}_n(A) = \{(\phi(A_1), \ldots, \phi(A_d)) : \phi \colon B(H) \to M_n$ is UCP$\}$.

A tuple $A$ is said to be *minimal* if there is no proper reducing subspace $G \subseteq H$ such that $\mathcal{W}(A|_G) = \mathcal{W}(A)$. It is said to be *fully compressed* if there is no proper subspace $G \subseteq H$ such that the compression of $A$ to $G$ has the same matrix range as $A$.

In an earlier paper (“Dilations, inclusions of matrix convex sets, and completely positive maps”) that I wrote with co-authors, we claimed that if two compact tuples and are minimal and have the same matrix range, then is unitarily equivalent to ; see Section 6 there (the printed version corresponds to version 2 of the paper on arxiv). This is false, as subsequent examples by Ben Passer showed (see this paper). A couple of other statements in that section are also incorrect, most obviously the claim that every compact tuple can be compressed to a minimal compact tuple with the same matrix range. All the problems with Section 6 of that earlier paper “Dilations,…” can be quickly fixed by adding a “non-singularity” assumption, and we posted a corrected version on the arxiv. (The results of Section 6 there do not affect the rest of the results in the paper, and are somewhat tangential to the main direction of that paper.)

In the current paper, Ben and I take a closer look at the non-singularity assumption that was introduced in the corrected version of “Dilations,…”, and we give a complete characterization of non-singular tuples of compacts. This characterization involves the various kinds of extreme points of the matrix range . We also make a serious investigation into the fully compressed tuples defined above. We find that a matrix tuple is fully compressed if and only if it is non-singular and minimal. Consequently, we get a clean statement of the classification theorem for compacts: two fully compressed tuples of compact operators are unitarily equivalent if and only if they have the same matrix range.


I had the privilege of working with two very bright students who have recently finished their undergraduate studies: Mattya Ben-Efraim (from Bar-Ilan University) and Yuval Yifrach (from the Technion). It is remarkable how much material they learned for this one week project (the basics of C*-algebras and operator spaces), and that they actually helped settle the question that I posed to them.

I learned a lot of things in this project. First, I learned that my conjecture was false! I also learned and re-learned some programming abilities, and I learned something about the subtleties and limitations of numerical experimentation (I also learned something about how to supervise an undergraduate research project, but that’s beside the point right now).

Following old advice of Halmos, the problem that I posed to Mattya and Yuval was in the form of a *yes/no* question. To state this question, we need to recall some definitions. If $A$ is an $n \times n$ matrix, a larger matrix $B$ is said to be a **dilation** of $A$ if

$B = \begin{pmatrix} A & * \\ * & * \end{pmatrix}.$

In this case, $A$ is said to be a **compression** of $B$. We then write $A \prec B$. If $A = (A_1, \ldots, A_d)$ and $B = (B_1, \ldots, B_d)$ are tuples of matrices, we say that $B$ is a dilation of $A$, and that $A$ is a compression of $B$, if $A_i \prec B_i$ for all $i$. We then write $A \prec B$.

A $d$-tuple $N = (N_1, \ldots, N_d)$ is said to be **normal** if each $N_i$ is normal and $N_i N_j = N_j N_i$ for all $i, j$. Normal tuples of matrices (or operators) are the best understood ones, because – thanks to the spectral theorem – they are simultaneously unitarily diagonalizable.

If $A$ is an $n \times n$ matrix, we define its **norm** to be the operator norm of $A$ when considered as an operator on $\mathbb{C}^n$, that is: $\|A\| = \sup\{\|Av\| : \|v\| \leq 1\}$ (here $\|\cdot\|$ denotes the Euclidean norm).
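For a matrix, this operator norm is just the largest singular value, which is how one computes it in practice. A quick numpy sanity check (the test matrix is an arbitrary random one of my choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# The operator norm sup{||Av|| : ||v|| <= 1} equals the largest singular value.
op_norm = np.linalg.norm(A, 2)
assert np.isclose(op_norm, np.linalg.svd(A, compute_uv=False)[0])

# Any particular unit vector only gives a lower bound.
v = rng.standard_normal(4)
v /= np.linalg.norm(v)
assert np.linalg.norm(A @ v) <= op_norm + 1e-12
```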

**The complex matrix cube problem:** *Given a tuple of matrices, can one find a normal dilation of such that for all ? *

I had some reasons to believe that the answer is *yes*, one of which was that it was proved that the answer is *yes* if are all selfadjoint; see this paper by Passer, Solel, and myself (I reported on this paper in this previous post). Passer later proved that if we replace with then the answer is *yes* for arbitrary tuples. Passer’s proof did not look optimal to me. Also, I had carried out some primitive numerical experimentation that seemed to suggest that a positive answer is plausible.

Suppose we are given a -tuple of contractions . We wish to know whether it is true or false that has a normal dilation such that for all (this is not exactly the way we formulated the problem above, but it can be seen to be equivalent).

The first observation is that it is enough to consider only tuples of unitaries. Indeed, if $T$ is a contraction (meaning that $\|T\| \leq 1$) then

$U = \begin{pmatrix} T & (I - TT^*)^{1/2} \\ (I - T^*T)^{1/2} & -T^* \end{pmatrix}$

is a unitary dilation of $T$. So given a $d$-tuple of contractions, we can find a $d$-tuple of unitaries that dilates it. Thus, we may as well assume that we are given a tuple of unitaries, and ask whether we can dilate it.
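This classical construction (due to Halmos) is easy to verify numerically. A sketch in numpy, with a random strict contraction of my choosing (the square root is taken via an eigendecomposition):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = A / (2 * np.linalg.norm(A, 2))   # a strict contraction, ||T|| = 1/2
I = np.eye(3)

# Halmos dilation: U = [[T, (I - TT*)^{1/2}], [(I - T*T)^{1/2}, -T*]].
U = np.block([[T, psd_sqrt(I - T @ T.conj().T)],
              [psd_sqrt(I - T.conj().T @ T), -T.conj().T]])

assert np.allclose(U @ U.conj().T, np.eye(6))   # U is unitary
assert np.allclose(U[:3, :3], T)                # top-left corner is T
```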

We considered normal tuples with joint eigenvalues at the vertices of the polytope , where is a regular polygon with vertices that circumscribes the unit disc . When is moderately large, the boundary of is very close to , and in this post I will ignore this difference (the reader can check that for the results we get, ignoring this difference actually puts us on the safe side of things).
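As a small sanity check on this geometric picture, here is a numpy sketch (the number of vertices and the vertex radius $1/\cos(\pi/n)$ are my own illustrative choices, consistent with a regular polygon that circumscribes the unit disc):

```python
import numpy as np

n = 16                                   # number of vertices (illustrative)
R = 1 / np.cos(np.pi / n)                # vertex radius of the circumscribing n-gon
vertices = R * np.exp(2j * np.pi * np.arange(n) / n)

# The midpoint of each edge lies exactly on the unit circle, so every edge
# is tangent to the disc: the polygon circumscribes the unit disc.
midpoints = (vertices + np.roll(vertices, -1)) / 2
assert np.allclose(np.abs(midpoints), 1.0)

# For moderately large n the vertices protrude only slightly beyond the disc.
assert R - 1 < 0.02
```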

Given a fixed tuple of unitaries , it can be shown that has a normal dilation with for all if and only if

(*)

for every matrix valued polynomial of degree one , where is the fixed normal tuple we constructed above. Let me emphasize: here is some normal dilation that we don’t know whether it exists or not, and is the fixed tuple with joint eigenvalues at the vertices of the polytope from above. We recall that a matrix valued polynomial is evaluated on a tuple of matrices as follows:

.
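One common convention for this evaluation uses Kronecker (tensor) products: a degree-one matrix-valued polynomial $p(z) = A_0 + \sum_j A_j z_j$ is evaluated on a tuple as $p(T) = A_0 \otimes I + \sum_j A_j \otimes T_j$. A numpy sketch under that assumption (all coefficient and tuple matrices below are illustrative random choices):

```python
import numpy as np

def eval_poly(coeffs, T):
    """Evaluate p(z) = A_0 + sum_j A_j z_j on the tuple T = (T_1, ..., T_d),
    using the convention p(T) = A_0 (x) I + sum_j A_j (x) T_j."""
    A0, As = coeffs[0], coeffs[1:]
    m = T[0].shape[0]
    out = np.kron(A0, np.eye(m))
    for Aj, Tj in zip(As, T):
        out = out + np.kron(Aj, Tj)
    return out

rng = np.random.default_rng(3)
coeffs = [rng.standard_normal((2, 2)) for _ in range(3)]   # A_0, A_1, A_2
T = [rng.standard_normal((4, 4)) for _ in range(2)]        # a pair of matrices
pT = eval_poly(coeffs, T)
assert pT.shape == (8, 8)

# With 1x1 (scalar) coefficients this reduces to an ordinary linear combination.
s = eval_poly([np.array([[2.0]]), np.array([[1.0]]), np.array([[-1.0]])], T)
assert np.allclose(s, 2 * np.eye(4) + T[0] - T[1])
```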

So the first method of attack was the following: we randomly sampled a unitary tuple , and then we tried to find a polynomial such that (*) was violated, with . We thought of several ways to look for such a polynomial, one of which was naively iterating over a mesh of all possible coefficients . As you can easily see, this method is so inefficient that even for moderately small sizes the search could take us a lifetime. Another idea was to run some numerical optimization, such as gradient descent, on the function , but since this function is not convex this was also quite futile. And all this just for one given tuple , which might happen to have a dilation.

The second general approach was still to randomly select a tuple of unitaries and to check whether it has a normal dilation, but this time the test was somewhat more indirect. Basically, modulo some equivalences within the theory, we know that has the required dilation of size at most if and only if there exists a UCP map sending to for , where is the tuple of normals constructed above. This, modulo some more equivalences (and as was noted in this paper of Helton, Klep and McCullough), is equivalent to the existence of positive semidefinite matrices such that

for

where , for , and

.

The existence of such semidefinite matrices can be interpreted as the feasibility of a certain semidefinite program (SDP). In fact, we decided to treat the full semidefinite program as follows

minimize

such that

,

,

.

Note that we moved to the right hand side, to make the equality constraint affine in the variables and . Recall that and are all fixed. In the implementation we actually phrased this as the equivalent maximization problem

maximize

such that

,

,

.

Now, there exists available software in Matlab that lets one solve the above SDP quite reliably, and we used the high level interface CVX, which invoked either one of the solvers SDPT3 and SeDuMi (we used both solvers and played with precision parameters to increase our confidence that the results we got were correct). This approach had the great advantage that (besides being much faster) it could tell us the smallest such that had a normal dilation such that .

We ran the tests for small values of and . You can see some histograms in the presentation (the value plotted in the histograms in the presentation is , in order to have a direct comparison with the conjecture). Interestingly, we see that with very high probability, the required value of is on average significantly lower than . For and , we found a few random counter examples, but the required constant was just 2% over the conjectured one.

Once we know that the average value of is less than , it heuristically becomes reasonable that counter examples are hard to come by, because of concentration of measure phenomena: roughly speaking, the probability of a Lipschitz function on the unitaries (say) being far from its mean goes down exponentially with the dimension. For the same reason, once we found a counter example , it is very hard to find coefficients of a matrix valued polynomial such that . And indeed, we have not yet verified by an independent method that our counter examples are indeed counter examples.

The counter examples we found are very unlikely to be caused by numerical error, since we tested the result with a couple of solvers, and the advertised precision of the solvers is several orders of magnitude finer than 2%.

After we found the random counter examples, it occurred to us that there was no reason to sample the unitaries independently. We recalled that in the selfadjoint case, tightness of the constant was established using anti-commuting unitaries. Indeed, since counter examples are rare, one would think that the matrices would have to conspire somehow in order to mess up the inequality. So we searched for things that are anti-commuting-like. And it did indeed turn out that the commuting matrices

where are also a counter example (in the case ). We also still haven’t found a polynomial for which . We will probably continue looking for one when the holiday is over, and then I will update.
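The prototypical anti-commuting unitaries are the Pauli matrices, which is the kind of structure we were hunting for. A quick numpy check (standard matrices, not the ones from our experiments):

```python
import numpy as np

# Pauli matrices: unitary, selfadjoint, and anti-commuting.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(X @ X.conj().T, np.eye(2))   # X is unitary
assert np.allclose(Z @ Z.conj().T, np.eye(2))   # Z is unitary
assert np.allclose(X @ Z, -(Z @ X))             # anti-commuting: XZ = -ZX
```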

Here are Mattya and Yuval’s slides which they presented in the talk they gave at the end of the week. I also plan to put the code and files with raw results online on my homepage at some point.

The main method for checking what is the “inflation constant” required for a dilation, using a semidefinite program, is based on basic operator space theory, and in particular draws upon the algorithm described in this paper of Helton, Klep and McCullough.

We used Matlab. The numerical heavy lifting was done by others. We solved the semidefinite program using CVX – a high level Matlab software package for specifying and solving convex programs. We also used YALMIP – another high level Matlab software package for specifying and solving convex programs – to verify the results we obtained with CVX. Both CVX and YALMIP invoked SDP solvers SDPT3 and SeDuMi.

This project came after several years of collaboration with colleagues, and in particular, I had many conversations on the subject with Ben Passer before and during the projects week.

I owe many thanks to the organizers of this projects week, Ram Band, Tali Pinsky, and Ron Rosenthal. Thanks to this opportunity I explored an avenue that I never walked through before.


I have been in contact with the students in the last few weeks and we decided to concentrate on “the matrix cube problem”. On Sunday, when the week begins, I will need to present the background to the project to all participants of this week, and I have **seven minutes (!!)** for this. As everybody knows, the shorter the presentation, the harder the task, and the more preparation and thought it requires. So I will use this blog to practice my little talk.

This project is in the theory of operator spaces. My purpose is to give you some kind of flavour of what the theory is about, and what we will do this week to contribute to our understanding of this theory.

Let be an -dimensional Hilbert space (this just means: an -dimensional inner product space over the complex numbers). Recall that is also a normed space with norm . A basic fact is that every such is *isometrically isomorphic* to the space equipped with the standard inner product

,

which induces the Euclidean norm. This means that there exists a linear isomorphism that preserves the inner product, and in particular the norm.

Take two linearly independent vectors , and construct the subspace . **Fact:** no matter how we choose and , is always a -dimensional Hilbert space, i.e., it is **isometrically isomorphic** to with the Euclidean norm.
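The fact follows from Gram–Schmidt, and can be illustrated numerically. A sketch (assuming numpy; the ambient dimension and the two random vectors are my own choices), using QR to produce an orthonormal basis of the span:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Gram-Schmidt (via QR) gives an orthonormal basis of span{x, y}; the
# coordinate map with respect to this basis is then an inner-product-
# preserving isomorphism onto C^2.
Q, Rm = np.linalg.qr(np.column_stack([x, y]))
assert np.allclose(Q.conj().T @ Q, np.eye(2))   # the basis is orthonormal

# Coordinates of x and y are the columns of Rm, and inner products match:
# <x, y> in H equals <coords(x), coords(y)> in C^2.
assert np.isclose(np.vdot(y, x), np.vdot(Rm[:, 1], Rm[:, 0]))
```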

Now let be the space of linear operators on . This space is also a normed space, when we give it the norm

.

Using the isometric isomorphism , we will identify with the space of matrices.

Take two linearly independent operators , and construct the subspace . As a linear space, is isomorphic to . However, as a **normed** space it might be any one of an uncountable family of two dimensional normed spaces. For example, it can be isometrically isomorphic to or to . On the other hand, if we are assuming that is finite dimensional, then it cannot be isometrically isomorphic to ! (If we allow for infinite dimensional , then we can get any two dimensional normed space as the span of two operators.)
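Two concrete spans already show different norms. A numpy illustration (the matrix units below are my own choice of examples): the span of $E_{11}, E_{22}$ carries the $\ell^\infty$ norm, while the span of $E_{11}, E_{12}$ carries the $\ell^2$ norm.

```python
import numpy as np

E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
E22 = np.array([[0.0, 0.0], [0.0, 1.0]])
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])

rng = np.random.default_rng(5)
for _ in range(100):
    xs, ys = rng.standard_normal(2)
    # span{E11, E22}: the operator norm is max(|x|, |y|) -- l-infinity.
    assert np.isclose(np.linalg.norm(xs * E11 + ys * E22, 2),
                      max(abs(xs), abs(ys)))
    # span{E11, E12}: the operator norm is sqrt(x^2 + y^2) -- l-2.
    assert np.isclose(np.linalg.norm(xs * E11 + ys * E12, 2),
                      np.hypot(xs, ys))
```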

Understanding the normed space boils down to computing the norm:

for every .

It is remarkable that such a simple-minded problem is actually very difficult. What mathematicians can do in difficult situations is try one of the following:

- **Experiment with examples.** I cannot overstate how important it is for the health of one’s research that examples be sought and examined. Since computing the norm for matrices of even moderate size requires incredibly tedious calculations, it becomes at some point obvious that we should recruit the computer to help us explore what is going on.
- **Solve the problem for an interesting special case.** For example, suppose that the two operators are **normal** and **commuting**. Then we know that there exists an orthonormal basis in which both are diagonal. The calculation of any polynomial in them is then easy, and in particular of its norm.
- **Reduce the problem to a special case.** For example, the easiest case is when the two operators are scalars. The case of two commuting normal operators reduces to the case of scalar ones, because in this case the pair decomposes as a direct sum.
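The special case of commuting normal operators can be checked numerically: after simultaneous diagonalization, the norm of a linear combination is read off the joint eigenvalues. A sketch (assuming numpy; the joint eigenvalues and the conjugating unitary are random choices of mine):

```python
import numpy as np

rng = np.random.default_rng(6)
lam = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # eigenvalues of A
mu = rng.standard_normal(4) + 1j * rng.standard_normal(4)    # eigenvalues of B
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

# A and B are normal and commuting: simultaneously diagonalized by U.
A = U @ np.diag(lam) @ U.conj().T
B = U @ np.diag(mu) @ U.conj().T
assert np.allclose(A @ B, B @ A)

# The norm of x*A + y*B is the largest modulus of x*lam_i + y*mu_i.
x, y = 2.0, -3.0
assert np.isclose(np.linalg.norm(x * A + y * B, 2),
                  np.abs(x * lam + y * mu).max())
```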

We are led to the question: can we learn something about the case of general operators by using the fact that the problem is solved for commuting normal ones?

Now, a general pair of operators cannot be decomposed into some kind of “sum” of normal commuting pairs. However, we do have the following theorem.

**Theorem.** There exists a constant such that for any two matrices , identified with two operators on , there exist two commuting normal operators such that

**(*)**

and

.

The equality **(*)** gives , so we can get a bound on if we have a reasonable bound on .

We are therefore led to the question: **what is the best possible value of ?**

I have worked on this problem with collaborators, and we have partial results. The optimal constant eludes us; we are stuck, and we are not sure what the constant should be. What to do? We go back to option no. 1: experiment with examples.

Ok. That is clearly more than seven minutes. To force myself to adhere to the seven minute limit, I made slides (there are four real slides there, following the rule of one slide per two minutes).


I returned to the phone conversation. My mother was on the line, and she asked me to talk to my father, who had brain surgery scheduled for tomorrow, and persuade him to get dressed and go to the hospital with her. He accepted my authority. “OK father”, he said.

And then the dean came in. Bla bla bla, some committee of important people something, bla bla, my tenure and promotion to the rank of associate professor were approved. Congratulations! “Thank you.”

Later that day I drove to the hospital to see my parents. “Hey, good news – I got tenure.” My father was moved to tears. He thought that tenure was a really big deal. He was right. It is a really big deal. On the next morning he went into surgery, from which he never woke up.

And so the last important thing that I told my father was that I got tenure, and that made him very happy. But I wish that it could have been something else.


“The first rule of style”, writes Polya, “is to have something to say”.

“The second rule of style is to control yourself when, by chance, you have two things to say; say first one, then the other, not both at the same time”.

Polya’s third rule of style is: “Don’t say what does not need to be said” or maybe “Don’t say the obvious”. I am not sure of the exact formulation, because Polya doesn’t write the third rule down – that would be a violation of the rule!

Polya’s three rules are excellent and one is advised to follow them if one strives for *good style *when writing mathematics. However, style is not the only criterion by which we measure mathematical writing. There is a tradeoff between succinct and elegant style, on the one hand, and clarity and precision, on the other.

“Don’t say the obvious” – sure! But what is obvious? And to whom? A careful writer leaving a well placed exercise in a textbook is one thing. An author of a long and technical paper that leaves an exercise to the poor, overworked referee, is something different. And, of course, a mathematician leaving cryptic notes to his four-months-older self, is the most annoying of them all.

“Don’t say the obvious” – sure, sure! But is it even true? I think that all the mistakes that I am responsible for publishing originated in the omission of an “obvious” argument. I won’t speak about actual mistakes made by others, but I do have the feeling that some people have gotten away with not explaining something non-trivial, and were lucky that things turned out to be as their intuition suggested (granted, having the correct intuition is also a non-trivial achievement).

I disagree with Polya’s third rule of style. And you see, to reject it, I had to formulate it. QED.
