Most of the time we will stick to the assumption that all Hilbert spaces appearing are separable. This will only be needed at one or two spots (can you spot them?).

In addition to “Exercises”, I will start suggesting “Projects”. These projects might require investing a significant amount of time (a student is not expected to choose more than one project).

**Definition 1:** Let $M$ be a von Neumann algebra. Two projections $p, q$ in $M$ are said to be **equivalent** (or **Murray-von Neumann equivalent**), written $p \sim q$, if there exists a partial isometry $v \in M$ such that $v^*v = p$ and $vv^* = q$.

Note: a crucial part of the definition is that $v \in M$. One can think of the subspaces corresponding to $p$ and $q$ as subspaces that "look the same in the eyes of $M$."

**Exercise A:** Murray-von Neumann equivalence is an equivalence relation.

**Exercise B:** Describe when two projections are equivalent in (i) the matrix algebra $M_n(\mathbb{C})$, and (ii) $B(H)$.

Recall that in Lecture 3 (Definition 2), we defined the range projection of an operator $t$ to be equal to the orthogonal projection onto $\overline{\operatorname{ran} t}$; equivalently, it is the smallest projection $e$ such that $et = t$. One sometimes denotes it by $l(t)$, and calls it the **left support** of $t$. Similarly, the smallest projection $f$ such that $tf = t$ is denoted by $r(t)$ and called the **right support** of $t$; it is equal to the range projection of $t^*$.

**Proposition 2:** *If $M$ is a von Neumann algebra and $t \in M$, then $r(t) \sim l(t)$.*

**Proof:** Use the polar decomposition $t = v|t|$ of $t$ to find the partial isometry that provides the equivalence: $v \in M$, $v^*v = r(t)$ and $vv^* = l(t)$.
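To make Proposition 2 concrete, here is a minimal finite-dimensional sketch (the matrices and helper functions are ours, not from the lecture): in $M_2(\mathbb{R})$, the polar part $v$ of $t = e_{12}$ satisfies $v^*v = r(t)$ and $vv^* = l(t)$, exhibiting the equivalence of the two supports.

```python
# Hedged toy example: t = e_{12} in M_2(R). Here |t| = e_{22},
# and the polar decomposition t = v|t| has polar part v = t itself.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def adjoint(a):  # transpose suffices for real entries
    return [[a[j][i] for j in range(2)] for i in range(2)]

t = [[0, 1], [0, 0]]   # t = e_{12}
v = t                  # polar part of t in this example

right_support = mat_mul(adjoint(v), v)  # v*v = e_{22}, projection onto (ker t)^perp
left_support = mat_mul(v, adjoint(v))   # vv* = e_{11}, projection onto ran t

print(right_support)  # [[0, 0], [0, 1]]
print(left_support)   # [[1, 0], [0, 0]]
```

The two supports are rank-one projections interchanged by the partial isometry $v$, which is the content of $r(t) \sim l(t)$ in this baby case.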

**Definition 3:** Let $M$ be a von Neumann algebra, and let $p, q$ be projections in $M$. We write $p \precsim q$, and say that $p$ is (Murray-von Neumann) **sub-equivalent to** $q$, if there exists a partial isometry $v \in M$ such that $v^*v = p$ and $vv^* \le q$ (in other words, if $p$ is equivalent to a sub-projection of $q$). If $p$ is sub-equivalent but not equivalent to $q$, then we write $p \prec q$.

Murray-von Neumann sub-equivalence is a partial ordering on the set of equivalence classes of projections in a von Neumann algebra. The reflexivity and transitivity of this relation are straightforward. We will soon prove that it is anti-symmetric, that is, that $p \precsim q$ and $q \precsim p$ together imply $p \sim q$.

**Lemma 3:** *Let $\{p_i\}_{i \in I}$ and $\{q_i\}_{i \in I}$ be two families of mutually orthogonal projections. If $p_i \sim q_i$ (respectively, $p_i \precsim q_i$) for all $i$, then $\sum_i p_i \sim \sum_i q_i$ (respectively, $\sum_i p_i \precsim \sum_i q_i$).*

**Proof:** If $v_i^*v_i = p_i$ and $v_iv_i^* = q_i$ [respectively, $v_iv_i^* \le q_i$], then $\sum_i v_i$ converges strongly to a partial isometry $v$, and $v^*v = \sum_i p_i$, while $vv^* = \sum_i q_i$ [respectively, $vv^* \le \sum_i q_i$] (note that $v_i^*v_j = v_i^*q_iq_jv_j = 0$ and $v_iv_j^* = v_ip_ip_jv_j^* = 0$ for $i \ne j$, so $v^*v = \sum_{i,j} v_i^*v_j = \sum_i p_i$, and likewise for $vv^*$).

The next proposition is an analogue of the Cantor-Schröder-Bernstein theorem, and shows that sub-equivalence is indeed a partial ordering on the set of equivalence classes of projections.

**Proposition 4:** *If $p \precsim q$ and $q \precsim p$, then $p \sim q$.*

**Proof:** Suppose that

$v^*v = p$ and $vv^* = q' \le q$,

and

$w^*w = q$ and $ww^* = p' \le p$.

We define two decreasing sequences of projections $(p_n)$ and $(q_n)$ by induction. Write $p_0 = p$ and $q_0 = q$, and define

$p_{n+1} = wq_nw^*$ and $q_{n+1} = vp_nv^*$ for all $n \ge 0$.

We have that $p_1 = ww^* = p' \le p_0$, by assumption, and $p_{n+1} = wq_nw^* \le wq_{n-1}w^* = p_n$, by induction (likewise for $(q_n)$).

Now, since $p_{n+1} \le p_n$ for all $n$, we have

$p = \sum_{n=0}^{\infty} (p_n - p_{n+1}) + p_\infty$,

where $p_\infty = \inf_n p_n$ (the SOT limit of the sequence). Likewise,

$q = \sum_{n=0}^{\infty} (q_n - q_{n+1}) + q_\infty$.

Now, for every $n$, we have by definition that $v(p_n - p_{n+1})v^* = q_{n+1} - q_{n+2}$. But then $v(p_n - p_{n+1})$ is a partial isometry setting up an equivalence between $p_n - p_{n+1}$ and $q_{n+1} - q_{n+2}$. Thus, $p_n - p_{n+1} \sim q_{n+1} - q_{n+2}$.

Likewise, $q_n - q_{n+1} \sim p_{n+1} - p_{n+2}$, and $p_\infty \sim q_\infty$ (note that $wq_\infty$ is a partial isometry with $(wq_\infty)^*(wq_\infty) = q_\infty$ and $(wq_\infty)(wq_\infty)^* = p_\infty$). Now we cleverly put this together, obtaining, with the help of Lemma 3,

$p = p_\infty + \sum_{n \text{ even}} (p_n - p_{n+1}) + \sum_{n \text{ odd}} (p_n - p_{n+1}) \sim q_\infty + \sum_{m \text{ odd}} (q_m - q_{m+1}) + \sum_{m \text{ even}} (q_m - q_{m+1}) = q$.

**Proposition 5:** *For two projections $p$ and $q$ in a von Neumann algebra $M$, TFAE:*

1. *$z(p)z(q) = 0$ (i.e., the central covers $z(p)$ and $z(q)$ of $p$ and $q$ are orthogonal).*
2. *$pMq = \{0\}$.*
3. *For all nonzero $p_1 \le p$ and $q_1 \le q$, $p_1$ and $q_1$ are not equivalent.*

**Proof:** If 1 holds, then $pxq = z(p)p\,x\,qz(q) = z(p)z(q)\,pxq = 0$ for all $x \in M$, so 2 holds. On the other hand, if $pMq = \{0\}$, then we set $J = \overline{\operatorname{span}}^{w}(MpM)$. Then $J$ is a weakly closed ideal, so by Theorem 6 in Lecture 3, $J = zM$ for some central projection $z$. But $p \in J$, so $pz = p$, and therefore $z(p) \le z$. But $qMp = (pMq)^* = \{0\}$, and it follows that $qJ = \{0\}$. Therefore $qz = 0$, so $q \le 1 - z$, and it follows that $z(q) \le 1 - z$. Hence $z(p)z(q) \le z(1-z) = 0$. So 2 implies 1.

Next, if $p_1 \sim q_1$ via a partial isometry $v$, where $0 \ne p_1 \le p$ and $q_1 \le q$, then $0 \ne v^* = p_1v^*q_1 \in pMq$; thus 2 implies 3.

Finally, suppose that 2 fails. If $t = pxq$ is a nonzero element of $pMq$, then $l(t) \le p$ and $r(t) \le q$ are nonzero. By Proposition 2, $l(t) \sim r(t)$, so 3 fails as well.

**Definition 6:** Two projections in a von Neumann algebra satisfying the conditions of the previous proposition are said to be **centrally orthogonal**.

The relation of Murray-von Neumann sub-equivalence is a partial ordering, but it is not total: if $p$ and $q$ are projections in a von Neumann algebra $M$, it may happen that neither $p \precsim q$ nor $q \precsim p$ holds. The following comparison theorem shows how one may always bring projections to a position where they are comparable.

**Theorem 7 (the comparison theorem):** *If $p$ and $q$ are projections in a von Neumann algebra $M$, then there exists a central projection $z \in M$ such that $zp \precsim zq$ and $(1-z)q \precsim (1-z)p$.*

**Proof:** Let $\{p_i\}_{i \in I}, \{q_i\}_{i \in I}$ be a maximal pair of families of projections that satisfy $p_i \le p$, $q_i \le q$ and $p_i \sim q_i$ for all $i$, with each family mutually orthogonal. Put $p_0 = \sum p_i$ and $q_0 = \sum q_i$, so that $p_0 \sim q_0$ by Lemma 3. Well, if $p_0 = p$ or $q_0 = q$ then we are done (if $p_0 = p$ then $p \precsim q$ and we may take $z = 1$; if $q_0 = q$, take $z = 0$). Otherwise, consider the projections $p - p_0$ and $q - q_0$; these do not have any nonzero subprojections $p' \le p - p_0$ and $q' \le q - q_0$ such that $p' \sim q'$, for otherwise the pair of families would not be maximal. By the previous proposition, $p - p_0$ and $q - q_0$ are centrally orthogonal. Put $z = z(q - q_0)$. We find that $z(p - p_0) = 0$, so

$zp = zp_0 \sim zq_0 \le zq$,

where we used the fact that $r \sim s$ implies $zr \sim zs$ for all central $z$. Likewise, $(1-z)(q - q_0) = 0$, so

$(1-z)q = (1-z)q_0 \sim (1-z)p_0 \le (1-z)p$.

**Corollary:** *In a factor, every two projections are comparable. *

**Definition 8:** Let $p$ be a projection in a von Neumann algebra $M$. $p$ is said to be:

- **abelian** if $pMp$ is abelian.
- **finite** if $q \le p$ and $q \sim p$ implies that $q = p$.
- **infinite** if it is not finite.
- **properly infinite** if for every central projection $z$, $zp$ is either infinite or zero.
- **purely infinite** if for every projection $q \le p$, $q$ is either infinite or zero.

If the identity in a von Neumann algebra $M$ is finite/infinite/properly infinite/purely infinite, then $M$ is said to be finite/infinite/properly infinite/purely infinite.

**Examples: **

- In an abelian von Neumann algebra, all projections are abelian.
- Perhaps surprisingly, every abelian von Neumann algebra is finite. In fact, every abelian projection is finite (why?).
- It is easy to see which projections in $B(H)$ are finite, which are abelian, and which are infinite.
- $B(H)$ (for $H$ infinite dimensional) is properly infinite, but not purely infinite. Can you find an example of a projection in a von Neumann algebra that is infinite but not properly infinite? (I bet you can).
- Can you find an example of a projection in a von Neumann algebra that is purely infinite? (I bet you can’t).
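To see concretely why the identity of $B(H)$ is infinite when $H$ is infinite dimensional (a standard sketch, with notation of our choosing): the unilateral shift on $\ell^2(\mathbb{N})$ implements an equivalence between $1$ and a proper subprojection.

```latex
s\,e_n = e_{n+1} \quad (n \ge 0), \qquad s^*s = 1, \qquad ss^* = 1 - p_0,
```

where $p_0$ is the rank one projection onto $\mathbb{C}e_0$. Thus $1 \sim 1 - p_0 \lneq 1$, so $1$ is not finite.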

**Corollary (to Proposition 5): ***A nonzero projection in a factor is abelian if and only if it is minimal. *

**Proof:** Let $p$ be a nonzero abelian projection. Then it must be minimal, because if $0 \ne q < p$, then $q$ and $p - q$ are not centrally orthogonal (we are in a factor), so by Proposition 5 they dominate a pair of equivalent nonzero projections $q_1 \le q$ and $q_2 \le p - q$, say $v^*v = q_1$ and $vv^* = q_2$. Then $v \in pMp$ does not commute with $q_1$ (since $vq_1 = v \ne 0 = q_1v$), and this would show that $p$ is not abelian.

Conversely, if $p$ is minimal, then $pMp = \mathbb{C}p$, so $p$ is abelian.

**Exercise C:** Let $\{p_i\}_{i \in I}$ be a family of centrally orthogonal projections (i.e., $z(p_i)z(p_j) = 0$ for $i \ne j$). If every $p_i$ is abelian (finite), then $\sum_i p_i$ is abelian (finite).

**Definition 9:** A von Neumann algebra $M$ is said to be (of):

- **Type I** if for every nonzero central projection $z$, there exists a nonzero abelian projection $p \le z$ in $M$.
- **Type II** if $M$ has no nonzero abelian projections, but for every nonzero central projection $z$, there exists a nonzero finite projection $p \le z$ in $M$.
- **Type III** if $M$ has no nonzero finite projections (i.e., if $M$ is purely infinite).

For the sake of addressing an issue that had better not be addressed, let us say that the algebra $\{0\}$ acting on the Hilbert space $\{0\}$ is a von Neumann algebra of any type.

**Theorem 10:** *Let $M$ be a von Neumann algebra. Then there exists a unique decomposition of $M$ into a direct sum*

*$M = M_I \oplus M_{II} \oplus M_{III}$ of a type $I$, a type $II$ and a type $III$ von Neumann algebra.*

**Proof:** If there are no abelian projections in $M$, let $z_I = 0$. Otherwise, let $\{p_i\}$ be a maximal family of centrally orthogonal abelian projections. Then, by Exercise C, $p = \sum p_i$ is abelian. Let $z_I$ be the central cover of $p$. Then $M_I = z_IM$ is a von Neumann algebra, and we claim that it is of type I. Indeed, if $0 \ne z \le z_I$, that is, if $z$ is a nonzero central projection in $M_I$, then $zp$ is a nonzero abelian projection dominated by $z$ (if it was zero, then $p \le z_I - z$ would contradict the fact that $z_I$ is the central cover of $p$).

By design, $(1 - z_I)M$ is a von Neumann algebra with no abelian projections. Let $\{q_j\}$ be a maximal family of centrally orthogonal finite projections in $(1 - z_I)M$, and let $q = \sum q_j$, which is finite thanks to Exercise C. Now let $z_{II}$ be the central cover of $q$ in $(1 - z_I)M$. Then $M_{II} = z_{II}M$ is a von Neumann algebra, and as in the previous paragraph, one shows that it is of type II.

Finally, letting $z_{III} = 1 - z_I - z_{II}$, we find that $z_{III}$ is central, and $M_{III} = z_{III}M$ is a type III von Neumann algebra.

We leave it to the reader to check that the decomposition is unique.

Thus, a von Neumann algebra in general does not have to be of a particular type. But for factors, things are nicer.

**Corollary:** *A factor is either of type I, type II, or type III. *

There is a theory, going back to von Neumann, that describes how every von Neumann algebra can be decomposed uniquely into a direct integral of factors. We shall not go into that direction. Since a considerable amount of interesting work on classification theory is concentrated on factors, and there are many interesting examples, we shall mostly speak about factors.

**Example 11:** As our first example, we note that, trivially, commutative von Neumann algebras are always of type I. A slightly deeper fact is this: if $M \subseteq B(H)$ ($H$ separable) is an abelian von Neumann algebra, then $M'$ is also of type I. To see this, let $z$ be a central projection in $M'$. We need to show that $z$ dominates an abelian projection in $M'$. For this end, let $h \in zH$ be a nonzero vector, and let $e$ be the orthogonal projection onto $\overline{Mh}$. Then $e$ is nonzero, $e \in M'$ and $e \le z$. Then $Me$ is a cyclic and commutative von Neumann algebra on $eH$, and we may also assume that it is singly generated. Therefore, $Me$ is unitarily equivalent to $L^\infty(\mu)$ acting on $L^2(\mu)$, which is maximal abelian, so $eM'e = (Me)'$ (computed in $B(eH)$) is abelian. Therefore, $e$ is an abelian projection in $M'$ dominated by $z$, and this shows that $M'$ is of type I.

**Example 12:** As an example at the opposite extreme, let us consider $B(H)$, where $H$ is a Hilbert space. We have already seen that this is a factor, and it is a type I factor as a special case of the previous example, since $B(H) = (\mathbb{C}1)'$. Alternatively, to see that it is type I, one needs to show that it contains a nonzero abelian projection. But clearly, if $p$ is a minimal projection (which must be the rank one projection onto $\mathbb{C}h$ for some unit vector $h$), then $p$ is abelian.

In fact, the previous example contains all type I factors (up to isomorphism, not up to unitary equivalence), but we will have to wait a little bit for this. Before the following lemma, the reader might want to review Proposition 9 in Lecture 3.

**Lemma:** *Let $M \subseteq B(H)$ be a von Neumann algebra, and let $p' \in M'$ be a projection such that $z(p') = 1$. Then $x \mapsto xp'$ is a *-isomorphism from $M$ onto $Mp'$.*

**Proof:** We know that $x \mapsto xp'$ is a WOT continuous and surjective *-homomorphism onto $Mp'$, and so the kernel of this homomorphism is $zM$ for a central projection $z$. Therefore, $zp' = 0$, so $1 - z$ is a central projection dominating $p'$. Since $z(p') = 1$ we must have $z = 0$, and the map is injective.

**Theorem 13:** *Let $M$ be a von Neumann algebra. The following conditions are equivalent:*

1. *$M$ is of type I.*
2. *$M'$ is of type I.*
3. *There exists a faithful and WOT continuous representation $\pi$ of $M$ such that $\pi(M)'$ is abelian.*

**Remark:** Before the proof, let us remark that in 3 above, $\pi(M)$ will be a von Neumann algebra, because the unit ball of $M$ is WOT compact.

**Remark:** One last remark before the proof: it is also true that $M$ is of type II (respectively, type III) if and only if its commutant is of type II (respectively, type III). You may take this as a challenging exercise:

**Exercise D:** Take care of the remark above (for reference to start with, see Section 9.1 in Kadison-Ringrose, Vol II).

**Proof of Theorem 13:** Suppose that $M$ is of type I. As in the proof of Theorem 10, let $p \in M$ be a maximal sum of centrally orthogonal abelian projections; since $M$ is type I, $z(p) = 1$. The lemma (with the roles of $M$ and $M'$ reversed) now implies that $M' \cong M'p$. But $M'p = (pMp)'$ (computed in $B(pH)$) is the commutant of an abelian von Neumann algebra, so by Example 11, $M'p$ is of type I, and therefore $M'$ is of type I.

If $M'$ is of type I, then the previous paragraph (with the roles of $M$ and $M'$ reversed) shows that there exists a projection $p' \in M'$ so that the map $x \mapsto xp'$ is a WOT continuous *-isomorphism of $M$ onto the commutant of an abelian von Neumann algebra; this gives 3.

Finally, if $\pi(M)'$ is abelian, then $\pi(M) = (\pi(M)')'$ is of type I by Example 11, so $M$ is of type I.

**Corollary:** *If $M$ is a type I factor, then there is a Hilbert space $K$ such that $M$ is isomorphic to $B(K)$.*

**Proof:** If $M$ is a type I factor, then $M'$ is clearly a factor too, and it is of type I by the theorem. Let $p' \in M'$ be a nonzero abelian projection. By the corollary to Proposition 5, $p'$ is minimal, so $p'M'p' = \mathbb{C}p'$ as a von Neumann subalgebra of $B(p'H)$. Thus $Mp' = (p'M'p')' = B(p'H)$. Since $z(p') = 1$ ($M'$ being a factor), the lemma shows that $M \cong Mp' = B(p'H)$.

**Definition 14:** A type I factor is said to be of **type $I_n$** if it is isomorphic to $B(H)$ where $\dim H = n$. One writes **type $I_\infty$** when $n = \infty$.

The classification problem of type I factors up to isomorphism is therefore settled: there is exactly one type I factor of type $I_n$ for every cardinal $n$, and there aren't any other examples up to isomorphism (except non-popular examples living on non-separable Hilbert spaces).

We also see that the set of equivalence classes of projections in a type I factor, as a partially ordered set, is isomorphic either to $\{0, 1, \ldots, n\}$ for some finite $n$ or to $\mathbb{N} \cup \{\infty\}$.

If one wants to classify type I factors up to unitary equivalence, there is another issue that comes in, which we are not technically prepared to handle at the moment. Roughly, type I factors acting on a Hilbert space look like $B(H) \otimes \mathbb{C}1_K$ acting on $H \otimes K$, where $H$ and $K$ are Hilbert spaces. The meaning of the tensor notation will be made precise in the upcoming lectures.

Finally, we mention that one can describe all type I algebras on separable Hilbert spaces. Roughly, these are just direct sums of "matrix algebras with coefficients in commutative von Neumann algebras". We leave it to the interested student to work this out or dig it up.

**Project 1:** Determine the structure theory of type I von Neumann algebras. You might be able to go a significant part of the way on your own. Once stuck, help can be found in the following references: Conway (A Course in Operator Theory), Kadison-Ringrose (Fundamentals of the Theory of Operator Algebras, Vol. II), or Takesaki (Theory of Operator Algebras, Vol. I).

**Definition 15:** A type II factor is said to be a **$II_1$ factor** if it is finite (that is, if $1$ is a finite projection); otherwise it is said to be a **$II_\infty$ factor**.

In this section, we will show that the group von Neumann algebras of ICC groups are type $II_1$ factors. Let us recall some notation. If $G$ is a countable group, we let $\{\delta_g\}_{g \in G}$ be the standard orthonormal basis of $\ell^2(G)$, and let the left and right regular representations be given by

$\lambda(g)\delta_h = \delta_{gh}$

and

$\rho(g)\delta_h = \delta_{hg^{-1}}$.

The (left and right) group von Neumann algebras are defined to be $L(G) = W^*(\lambda(G))$ and $R(G) = W^*(\rho(G))$. In the previous lecture, we saw that $L(G)' = R(G)$ and vice versa, and that $L(G)$ is a factor if and only if $G$ is an ICC group (the conjugacy class of every element, except the identity, is infinite).

Recall that on $L(G)$ we defined **the trace**

$\tau(x) = \langle x\delta_e, \delta_e \rangle$.

“The trace” is a WOT continuous, faithful tracial state, a notion we recall in the following definition:
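As a toy, hedged illustration of the formula $\tau(x) = \langle x\delta_e, \delta_e\rangle$ (an example entirely ours; $\mathbb{Z}/3$ is abelian, hence not ICC, so $L(\mathbb{Z}/3)$ is not a factor, but the formula still makes sense): for $G = \mathbb{Z}/3$ the left regular representation consists of circulant matrices, and $\tau$ agrees with the normalized matrix trace and is tracial.

```python
# Hedged toy example: elements of L(Z/3) as circulant 3x3 matrices.

def circulant(c):
    # matrix of sum_g c[g] * lambda(g) on l^2(Z/3), in the basis delta_0, delta_1, delta_2
    return [[c[(i - j) % 3] for j in range(3)] for i in range(3)]

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def tau(x):
    return x[0][0]  # <x delta_e, delta_e>, with e = 0

x = circulant([1, 2, 3])
y = circulant([4, 0, 5])

# tau picks out the coefficient of the identity, equals the normalized trace,
# and satisfies tau(xy) = tau(yx)
print(tau(x), sum(x[i][i] for i in range(3)) / 3)   # 1 1.0
print(tau(mat_mul(x, y)) == tau(mat_mul(y, x)))     # True
```

For an ICC group the same formula produces the unique trace on the resulting $II_1$ factor; the finite abelian case above only illustrates the mechanics.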

**Definition 16:** Let $A$ be a C*-algebra. A **trace** on $A$ is a positive ($\tau(a) \ge 0$ for $a \ge 0$) and tracial ($\tau(ab) = \tau(ba)$ for all $a, b \in A$) linear functional $\tau$. A trace is called a **tracial state** if it is also a state, that is, if $\|\tau\| = 1$.

Sometimes we will just say * trace* instead of “tracial state.”

**Theorem 17:** *Let $G$ be a countable ICC group. Then $L(G)$ is a type $II_1$ factor.*

**Proof:** We already know that $L(G)$ is a factor. Now, $L(G)$ is finite, which just means that $1$ is a finite projection. Indeed, if $p \le 1$ and $p \sim 1$, then there is $v$ with $v^*v = 1$ and $vv^* = p$, so $\tau(p) = \tau(vv^*) = \tau(v^*v) = \tau(1)$; hence $\tau(1 - p) = 0$, and faithfulness gives $p = 1$. This argument shows that $1$ is finite.

Being finite, $L(G)$ cannot be of type $I_\infty$, $II_\infty$, or $III$. The only remaining possibilities are type $I_n$ for $n < \infty$, or type $II_1$. Since $L(G)$ is infinite dimensional, only the case $II_1$ remains.

To argue a little more "constructively", we simply have to show that $L(G)$ has no abelian projections (since we already know that it is a finite factor). But if it had an abelian projection, the arguments used in the type I case would show that $L(G)$ is isomorphic to some $B(K)$, which we have seen cannot happen.

Note that the above proof actually shows that any infinite dimensional factor with a faithful tracial state is a type $II_1$ factor. Let us record this.

**Corollary:** *If $M$ is an infinite dimensional factor, and if $M$ has a faithful tracial state, then $M$ is a type $II_1$ factor.*

Nice, we see that there exist $II_1$ factors. Are there many of them? Yes. We will be able to show that there are *some*, not just that there is one. Dusa McDuff proved that there are uncountably many non-isomorphic ones, in fact uncountably many that arise as group von Neumann algebras.

**Project 2:** Read and present McDuff's paper "A countable infinity of $II_1$ factors" (there is also a second paper, "Uncountably many $II_1$ factors", if you are ambitious).

What about $II_\infty$ factors? It turns out that a von Neumann algebra $M$ is a (separably acting) type $II_\infty$ factor if and only if there exists a type $II_1$ factor $N$ such that

$M \cong N \mathbin{\overline{\otimes}} B(H)$

for an infinite dimensional separable Hilbert space $H$. We plan to discuss tensor products in the next lecture.

In the previous section we saw that ICC groups give rise to type $II_1$ factors. The fact that these factors are of type $II_1$ followed from the existence of a faithful (and WOT continuous) tracial state. It can in fact be shown that the existence of such a tracial state characterizes type $II_1$ factors.

**Theorem 18:** *An infinite dimensional factor $M$ is of type $II_1$ if and only if it has a faithful tracial state ("trace", for short). In this case, the trace is unique, and is in fact WOT continuous.*

Before sketching the idea of the proof of the Theorem, we collect some more definitions and propositions.

**Definition 19:** A von Neumann algebra is said to be *diffuse* if it contains no minimal projections.

Thus a factor is diffuse if and only if it is type II or type III.

**Proposition 20 (the halving lemma):** *Let $M$ be a diffuse factor. For every projection $p \in M$, there exist projections $p_1, p_2 \in M$ such that $p = p_1 + p_2$ and $p_1 \sim p_2$.*

**Proof:** Since $p$ is not minimal, there is some projection $0 \ne q < p$. By Proposition 5 (in a factor, $q$ and $p - q$ are not centrally orthogonal), there are mutually equivalent nonzero projections $q_1 \le q$ and $q_2 \le p - q$, and these satisfy $q_1 + q_2 \le p$.

Now we consider a maximal family of pairs $\{(e_i, f_i)\}_{i \in I}$ such that $e_i \sim f_i$, $e_i, f_i \le p$, and such that all the projections $e_i, f_i$ are mutually orthogonal. Set $p_1 = \sum e_i$ and $p_2 = \sum f_i$. Then $p_1 \sim p_2$ by Lemma 3, and by maximality (and the first part of the proof, applied to $p - p_1 - p_2$) $p = p_1 + p_2$.

**Exercise E:** Use the halving lemma to show that if $M$ is a $II_1$ factor with a WOT continuous tracial state $\tau$, then $\tau$ is the unique tracial state, and it is faithful. Conversely, prove that if $M$ is a von Neumann algebra with tracial state $\tau$, and if $\tau$ is the unique tracial state, then $M$ must be a factor (hence a finite factor).

**Exercise F:** Show that if $p$ is an infinite projection, then the halving lemma can be improved: *there exist projections $p_1, p_2$ such that $p = p_1 + p_2$ and $p_1 \sim p_2 \sim p$* (note the difference: $p_1$ and $p_2$ are also equivalent to $p$).

**Idea of the proof of Theorem 18:** Since all the examples come equipped with such a trace, we will not prove this theorem (at least for now). But let us go over the idea of the proof. The corollary to Theorem 17 says that the existence of a trace implies type $II_1$.

Conversely, let $M$ be a type $II_1$ factor. The factor $M$ is finite, and the equivalence classes of projections form a totally ordered set. Since there are no minimal projections, we might think of this set as being something like the interval $[0,1]$, which is indeed what it turns out to be. Using the halving lemma, we construct inductively a sequence of orthogonal projections $e_1, e_2, \ldots$ such that $e_n \sim 1 - (e_1 + \cdots + e_n)$ for all $n$. [Indeed, we start by finding $e_1$ such that $e_1 \sim 1 - e_1$, then we throw away $e_1$ and find $e_2 \le 1 - e_1$ such that $e_2 \sim 1 - e_1 - e_2$, and so forth.]

One then proceeds to show that the sequence $(e_n)$ can be used to give a "binary expansion" for every projection, i.e., every projection $p$ is equivalent to a sum of the form $\sum_{n \in S} e_n$ for some $S \subseteq \mathbb{N}$ (this requires work). One then defines $\tau(e_n) = 2^{-n}$, and uses the binary expansion to define

$\tau(p) = \sum_{n \in S} 2^{-n}$ whenever $p \sim \sum_{n \in S} e_n$.

The function $\tau$, currently defined on the set of projections $P(M)$, is called *the dimension function*. If this can be extended to a WOT continuous state, there is only one way in which it could extend, since $M$ is generated by its projections. One then works and works to show that this indeed extends to a WOT continuous, faithful tracial state.

Uniqueness, which you have already shown in Exercise E (by slightly less sophisticated technology), basically follows from the same ideas: the value of a (normalized) trace on $e_n$ must be $2^{-n}$ (because $e_n \sim 1 - (e_1 + \cdots + e_n)$ and induction), and this determines the value of the trace on any projection, hence on all of $M$.
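In summary (a sketch consistent with the discussion above, with $e_n$ the halving sequence suggested there), the values forced on any normalized trace are:

```latex
e_1 \sim 1 - e_1 \;\Longrightarrow\; \tau(e_1) = 1 - \tau(e_1) \;\Longrightarrow\; \tau(e_1) = \tfrac{1}{2},
```

and, inductively, $e_{n+1} \sim 1 - (e_1 + \cdots + e_{n+1})$ gives $2\tau(e_{n+1}) = 1 - \sum_{k=1}^{n} 2^{-k} = 2^{-n}$, i.e., $\tau(e_{n+1}) = 2^{-(n+1)}$; a projection with binary expansion $p \sim \sum_{n \in S} e_n$ must then receive $\tau(p) = \sum_{n \in S} 2^{-n}$.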

We finish this section by showing that the set of equivalence classes of projections in a $II_1$ factor is isomorphic (as a partially ordered set) to $[0,1]$.

**Theorem 21:** *Let $M$ be a $II_1$ factor, and let $\tau$ be the tracial state on $M$. Then $\tau(P(M)) = [0,1]$, and for any pair of projections $p, q$, $p \precsim q$ (resp. $p \sim q$) if and only if $\tau(p) \le \tau(q)$ (resp. $\tau(p) = \tau(q)$).*

**Proof:** The first assertion really follows from the proof of the above theorem. Next, if $p \precsim q$ then clearly $\tau(p) \le \tau(q)$. If $\tau(p) = \tau(q)$ and, say, $p \sim q' \le q$ (which we may assume by the comparison theorem), then $\tau(q - q') = 0$, so $q = q'$ because of positivity and faithfulness, and hence $p \sim q$. This basically finishes the proof.

The (non-normalized) trace on a matrix algebra, when evaluated on a projection, gives the dimension of the range of the projection. The trace on a type $II_1$ factor therefore serves as a kind of generalized "dimension function". von Neumann was fascinated by the fact that the dimension of projections in a type $II_1$ factor can vary continuously.

**Definition 22:** Let $M$ be a von Neumann algebra. A **tracial weight** on $M$ is a map $\tau : M_+ \to [0, \infty]$ such that

- $\tau(a + \lambda b) = \tau(a) + \lambda\tau(b)$ for all $a, b \in M_+$ and $\lambda \in [0, \infty)$.
- $\tau(x^*x) = \tau(xx^*)$ for all $x \in M$.

Some immediate consequences: $\tau(uau^*) = \tau(a)$, for all $a \in M_+$ and unitary $u \in M$, and $a \le b$ implies $\tau(a) \le \tau(b)$.

**Definition 22 (continued):** A tracial weight is said to be **normal** if $\tau(\sup_i a_i) = \sup_i \tau(a_i)$ (equivalently, $\tau(\lim_i a_i) = \lim_i \tau(a_i)$) for every bounded increasing net $(a_i)$ in $M_+$. It is said to be **semi-finite** if for every nonzero $a \in M_+$ there exists a nonzero $b \in M_+$ with $b \le a$ and $\tau(b) < \infty$, and **faithful** if $\tau(a) = 0$ implies $a = 0$.

Sometimes, we will say *semi-finite normal trace* instead of the longer "semi-finite normal tracial weight".

**Example:** Let $M$ be a von Neumann algebra with a tracial state $\tau$. Then $\tau|_{M_+}$ is a tracial weight. (A tracial weight for which $\tau(1) < \infty$ is said to be a *finite* weight.)

**Example:** Let $M = L^\infty(\mathbb{R})$ (with Lebesgue measure $m$), and define $\tau$ by

$\tau(f) = \int f \, dm$ for $f \in M_+$.

Then $\tau$ is indeed a tracial weight (obvious). It is semi-finite because the Lebesgue measure is regular, but it is not finite. It is normal because of the monotone convergence theorem.

**Example:** Let $M = B(H)$, and let $\{e_i\}_{i \in I}$ be an orthonormal basis for $H$. Define $\tau$ by

$\tau(a) = \sum_{i \in I} \langle ae_i, e_i \rangle$ for $a \in B(H)_+$.

When $\dim H < \infty$, this is just the usual trace. When $H = \ell^2$, this is just the sum over the diagonal elements in the matrix representation of $a$ in the basis $\{e_i\}$. This is, too, a normal, faithful and semi-finite tracial weight, which is not finite when $\dim H = \infty$ (you can prove this with your bare hands; it will also follow from the proof of Proposition 24 below).
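As a hedged finite-dimensional sanity check (the example is entirely ours): the value of $\sum_i \langle a e_i, e_i\rangle$ does not depend on the choice of orthonormal basis.

```python
# Hedged toy example: the "diagonal sum" of a 2x2 matrix is the same
# in the standard basis and in a rotated orthonormal basis.
import math

a = [[2.0, 1.0], [1.0, 3.0]]  # a positive 2x2 matrix

def quad(a, v):
    # <a v, v> for a real 2x2 matrix a and vector v
    av = [a[0][0] * v[0] + a[0][1] * v[1], a[1][0] * v[0] + a[1][1] * v[1]]
    return av[0] * v[0] + av[1] * v[1]

std = [[1.0, 0.0], [0.0, 1.0]]          # standard orthonormal basis
theta = 0.7
rot = [[math.cos(theta), math.sin(theta)],
       [-math.sin(theta), math.cos(theta)]]  # rotated orthonormal basis

tr_std = sum(quad(a, v) for v in std)   # 2 + 3 = 5
tr_rot = sum(quad(a, v) for v in rot)   # also 5, up to rounding

print(tr_std, round(tr_rot, 10))
```

The same basis-independence is what makes $\tau$ on $B(H)$ well defined (for positive operators, the possibly infinite sum is unambiguous).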

**Proposition 23:** *Let $\tau$ be a nonzero normal semi-finite trace on a factor $M$. Then*

1. *$\tau$ is faithful.*
2. *For every projection $p \in M$, $\tau(p)$ is infinite if and only if $p$ is infinite.*

**Remark:** Before the proof, note that this proposition also shows that a nonzero normal tracial state on a factor is faithful.

**Proof:** For (1), we will show that if $\tau$ is normal, semi-finite, and **not** faithful, then it is zero. It suffices to show that $\tau(1) = 0$, for then positivity implies that $\tau(a) \le \|a\|\tau(1) = 0$ for every $a \in M_+$, giving $\tau = 0$.

If $\tau$ is not faithful, then there is some nonzero $a \in M_+$ such that $\tau(a) = 0$. Then there is also some nonzero projection $p$ such that $\tau(p) = 0$ (take a spectral projection $p$ of $a$ with $\varepsilon p \le a$ for some $\varepsilon > 0$). Now let $\{p_i\}_{i \in I}$ be a maximal family of mutually orthogonal projections equivalent to $p$. Then $r = 1 - \sum_i p_i$ does not dominate a projection equivalent to $p$, because the family is maximal. Since $M$ is a factor, we have by the corollary to the comparison theorem (Theorem 7) that $r \precsim p$. Thus

$\tau(p_i) = \tau(p) = 0$ for all $i$,

and

$\tau(r) \le \tau(p) = 0$.

But now additivity and normality of $\tau$ imply that

$\tau(1) = \sum_i \tau(p_i) + \tau(r) = 0$.

That concludes the proof of (1).

For (2), first note that an infinite projection $p$ can be written as $p = p_1 + p_2$, where $p_1$ and $p_2$ are orthogonal projections equivalent to $p$ (the case of type I is immediate, and the case of types II and III is taken care of by Exercise F). But then

$\tau(p) = \tau(p_1) + \tau(p_2) = 2\tau(p)$.

Since part (1) rules out $\tau(p) = 0$, we must have $\tau(p) = \infty$.

Finally, let $p$ be a finite projection. By semi-finiteness, there is a nonzero projection $q \le p$ such that $\tau(q) < \infty$ (obtained from a spectral projection of a nonzero $b \le p$ with $\tau(b) < \infty$). Let $\{q_i\}$ be a maximal family of mutually orthogonal subprojections of $p$, such that $q_i \sim q$ for all $i$. Since $p$ is finite, this family has finitely many elements, say $q_1, \ldots, q_N$. As above, $p - \sum_{i=1}^N q_i \precsim q$ by maximality, so $\tau(p - \sum q_i) \le \tau(q)$. Therefore we find

$\tau(p) = \sum_{i=1}^N \tau(q_i) + \tau\left(p - \sum q_i\right) \le (N+1)\tau(q) < \infty$.

**Proposition 24 (existence of tracial weights):** *Every type $I_\infty$ factor and every type $II_\infty$ factor has a faithful, normal, semi-finite trace. This trace is unique up to a positive scalar factor.*

**Proof:** Since every type I factor has the form $B(H)$, the example given above (the usual trace $\operatorname{Tr}$) shows that it carries such a tracial weight when $\dim H = \infty$ (and if $\dim H < \infty$, then the usual trace is a finite tracial weight satisfying all conditions). Thus, we need only consider the case of a type $II_\infty$ factor $M$ (the reader will notice, though, that the proof could work for type $I_\infty$ just as well). Moreover, the previous proposition shows that faithfulness is immediate, so we only have to prove that there exists a nonzero normal semi-finite tracial weight.

Since $M$ is of type II, it has a nonzero finite projection $p$.

**Claim:** *Let $p$ be a nonzero finite projection in an infinite factor $M$. Then there exists a family $\{p_i\}_{i \in I}$ of mutually orthogonal projections, such that $p_i \sim p$ for all $i$, and such that $\sum_i p_i = 1$.*

Assuming the claim for the moment, we prove the existence of a semi-finite normal tracial weight as follows. The von Neumann algebra $pMp$ is finite, so it has a WOT continuous trace $\tau_0$ defined on it. Let $v_i$ be partial isometries so that $v_i^*v_i = p$ and $v_iv_i^* = p_i$. We define

$\tau : M_+ \to [0, \infty]$

by

$\tau(a) = \sum_{i \in I} \tau_0(v_i^*av_i)$.

First of all, this is well defined, because $v_i^*av_i \in (pMp)_+$, and the summands are all non-negative. So we have a map $\tau : M_+ \to [0, \infty]$, and by properties of infinite summation of non-negative numbers, we have the first item of Definition 22.

Let us drop the habit of skipping details that we have picked up, and show that $\tau$ is a normal, semi-finite, tracial weight.

We begin by showing $\tau(\sup_j a_j) = \sup_j \tau(a_j)$ for every bounded increasing net $(a_j)$ in $M_+$. Write $a = \sup_j a_j$. By positivity,

$\tau(a_j) \le \tau(a)$

for all $j$, so $\sup_j \tau(a_j) \le \tau(a)$.

For the reverse inequality, suppose first that $\tau(a) < \infty$, and fix $\varepsilon > 0$ (the case $\tau(a) = \infty$ is handled similarly). Our goal is to show that $\tau(a_j) \ge \tau(a) - 2\varepsilon$ for all "sufficiently large" $j$.

On the one hand, there exists a finite set of indices $F \subseteq I$ such that

$\sum_{i \in F} \tau_0(v_i^*av_i) \ge \tau(a) - \varepsilon$.

(In case $M$ acts on a separable Hilbert space, the family is an infinite sequence, and we could say that there exists an integer $N$ such that $\sum_{i=1}^N \tau_0(v_i^*av_i) \ge \tau(a) - \varepsilon$.)

On the other hand, we have

$\lim_j \tau_0(v_i^*a_jv_i) = \tau_0(v_i^*av_i)$

for all $i$, because $\tau_0$ is WOT continuous. So there is some $j_0$, such that

$\tau_0(v_i^*a_jv_i) \ge \tau_0(v_i^*av_i) - \varepsilon/|F|$ for all $i \in F$

for all $j \ge j_0$. For such $j$, we find

$\tau(a_j) \ge \sum_{i \in F} \tau_0(v_i^*a_jv_i) \ge \sum_{i \in F} \tau_0(v_i^*av_i) - \varepsilon \ge \tau(a) - 2\varepsilon$.

This shows that $\sup_j \tau(a_j) \ge \tau(a)$, and normality is established.

Next, let us show that $\tau$ is tracial. Since we have already dealt with normality, the following formal calculations are legal:

$\tau(x^*x) = \sum_i \tau_0(v_i^*x^*xv_i) \overset{(*)}{=} \sum_{i,k} \tau_0(v_i^*x^*v_kv_k^*xv_i) \overset{(**)}{=} \sum_{i,k} \tau_0(v_k^*xv_iv_i^*x^*v_k) \overset{(*)}{=} \sum_k \tau_0(v_k^*xx^*v_k) = \tau(xx^*)$.

The equations (*) follow from $\sum_i v_iv_i^* = \sum_i p_i = 1$ (SOT) and (**) follows from $\tau_0$ being a trace on $pMp$.

It remains to show that $\tau$ is semi-finite. For every finite subset of indices $F \subseteq I$, let $q_F = \sum_{i \in F} p_i$. Now, for every $F$, $q_F$ is a finite projection (Exercise G), and $\tau(q_F) = |F| < \infty$. If $a \in M_+$ is nonzero, then the net $(q_Faq_F)_F$ converges SOT to $a$. Therefore, there is some $F$ such that $q_Faq_F \ne 0$, and hence $b = a^{1/2}q_Fa^{1/2} \ne 0$. Then $b \le a$, and by the tracial property $\tau(b) = \tau(q_Faq_F) \le \|a\|\tau(q_F) < \infty$. This shows that $\tau$ is semi-finite.

Finally, the uniqueness of $\tau$ follows from the uniqueness of the trace on the finite type II factor $pMp$. This seems like a good time to revert to the habit of skipping details.

**Definition 25:** A von Neumann algebra is said to be *semi-finite* if it is of type I or type II.

Thus, a factor is semi-finite if and only if it is not type III. We have seen that semi-finite algebras have normal, faithful, semi-finite traces. The converse will be established below. In the meanwhile, I did not forget that we owe ourselves the following:

**Proof of claim:** Let $\{p_i\}_{i \in I}$ be a maximal family of mutually orthogonal projections such that $p_i \sim p$ for all $i$. Then $r := 1 - \sum_i p_i$ does not dominate a projection equivalent to $p$, by maximality, so $r \precsim p$ by the comparison theorem. If $r = 0$, then we put $\{p_i\}_{i \in I}$ as our family, and we are done.

Assume that $r \ne 0$. Let $v$ be a partial isometry such that $v^*v = r$ and $vv^* \le p$. Note that $\sum_i p_i + r = 1$ is infinite while $p$ is finite, so this implies that the family $\{p_i\}$ is infinite (see Exercise G below). For the proof, we will assume that this family is an infinite sequence $p_1, p_2, \ldots$ (by the end of the proof, it should be clear what to do if the cardinality of the family is greater than $\aleph_0$; if $M$ acts on a separable Hilbert space, then of course the cardinality cannot be strictly greater than $\aleph_0$).

Now, being equivalent to $p$, every $p_n$ breaks up as $p_n = e_n + f_n$, where $f_n \sim vv^* \sim r$ and $e_n \sim p - vv^*$. Now we define a new family of mutually orthogonal projections, by

$q_1 = e_1 + r$

and

$q_{n+1} = e_{n+1} + f_n$ for $n \ge 1$.

Then $q_n \sim e_n + f_n = p_n \sim p$ for all $n$ (using Lemma 3) and $\sum_n q_n = \sum_n e_n + \sum_n f_n + r = \sum_n p_n + r = 1$.

**Exercise G:** Prove that if $q \le p$ and $p$ is finite, then $q$ is finite. Prove that if $q \precsim p$ and $p$ is finite, then $q$ is finite. Prove that if $p$ and $q$ are finite and orthogonal projections, then $p + q$ is finite.

After Murray and von Neumann initiated the program of classification into types, they determined all type I algebras and gave examples of type II factors, but at first it was not known whether there exist type III algebras. Then von Neumann provided an example, and later Powers found uncountably many examples, and the classification problem for type III von Neumann algebras is still today a very active field of research. We will see examples of type III factors later on in this course. For now, we record the following result that is one of the technical keys for showing that a factor is type III.

**Proposition 26:** *A factor $M$ is of type III if and only if there does not exist a nonzero semi-finite normal trace on $M$.*

**Proof:** We already know, by the previous proposition, that if $M$ is not of type III, then there exists a nonzero normal semi-finite trace on it. On the other hand, if $M$ is of type III, then all nonzero projections in $M$ are infinite. Proposition 23(2) now tells us that if there were a nonzero semi-finite normal trace on $M$, then necessarily $\tau(p) = \infty$ for all nonzero projections $p$; but such a weight cannot be semi-finite. This completes the proof.


The purpose of this book is to serve as the accompanying text for a first course in functional analysis, taken typically by second- and third-year undergraduate students majoring in mathematics. As I prepared for my first time teaching such a course, I found nothing among the countless excellent textbooks in functional analysis available that perfectly suited my needs. I ended up writing my own lecture notes, which evolved into this book (an earlier version appeared on my blog).

The main goals of the course this book is designed to serve are to introduce the student to key notions in functional analysis (complete normed spaces, bounded operators, compact operators), alongside significant applications, with a special emphasis on the Hilbert space setting. The emphasis on Hilbert spaces allows for a rapid development of several topics: Fourier series and the Fourier transform, as well as the spectral theorem for compact normal operators on a Hilbert space.

I did not try to give a comprehensive treatment of the subject, the opposite is true. I did my best to arrange the material in a coherent and effective way, leaving large portions of the theory for a later course. The students who finish this course will be ready (and hopefully, eager) for further study in functional analysis and operator theory, and will have at their disposal a set of tools and a state of mind that may come in handy in any mathematical endeavor they embark on.

The text is written for a reader who is either an undergraduate student, or the instructor in a particular kind of undergraduate course on functional analysis. The background required from the undergraduate student taking this course is minimal: basic linear algebra, calculus up to Riemann integration, and some acquaintance with topological and metric spaces (in fact, the basics of metric spaces will suffice; and all the required material in topology/metric spaces is collected in the appendix).

Some “mathematical maturity” is also assumed. This means that the readers are expected to be able to fill in some details here and there, not freak out when bumping into a slight abuse of notation, and so forth. (For example, a “mathematically mature” reader needs no explanation as to what mathematical maturity is :-).

This book is tailor-made to accompany the course *Introduction to Functional Analysis* given at the Technion — Israel Institute of Technology. The official syllabus of the course is roughly: basic notions of Hilbert spaces and Banach spaces, bounded operators, Fourier series and the Fourier transform, the Stone-Weierstrass theorem, the spectral theorem for compact normal operators on a Hilbert space, and some applications. A key objective, not less important than the particular theorems taught, is to convey some underlying principles of modern analysis.

The design was influenced mainly by the official syllabus, but I also took into account the relative place of the course within the curriculum. The background that I could assume (mentioned above) did not include courses on Lebesgue integration or complex analysis. Another thing to keep in mind was that besides this course, there was no other course in the mathematics undergraduate curriculum giving a rigorous treatment of Fourier series or the Fourier transform. I therefore had to give these topics a respectable place in class. Finally, I also wanted to keep in mind that students who will continue on to graduate studies in analysis will take the department’s graduate course on functional analysis, in which the Hahn-Banach theorems and the consequences of Baire’s theorem are treated thoroughly.

This allowed me to omit these classical topics with a clean conscience, and use my limited time for a deeper study in the context of Hilbert spaces (weak convergence, inverse mapping theorem, spectral theorem for compact normal operators), including some significant applications (PDEs, Hilbert functions spaces, Pick interpolation, the mean ergodic theorem, integral equations, functional equations, Fourier series and the Fourier transform).

An experienced and alert reader might have recognized the inherent pitfall in the plan: how can one give a serious treatment of $L^2$ spaces, and in particular the theory of Fourier series and the Fourier transform, without using the Lebesgue integral? This is a problem which many instructors of introductory functional analysis face, and there are several solutions which can be adopted.

In some departments, the problem is eliminated altogether, either by making a course on Lebesgue integration a prerequisite to a course on functional analysis, or by keeping the introductory course on functional analysis free of $L^p$ spaces, with the main examples of Banach spaces being sequence spaces or spaces of continuous functions. I personally do not like either of these easy solutions. A more pragmatic solution is to use the Lebesgue integral as much as is needed, and to compensate for the students' background by either giving a crash course on Lebesgue integration or by waving one's hands where the going gets tough.

I chose a different approach: hit the problem head on using the tools available in basic functional analysis. I define the space $L^2[a,b]$ to be the completion of the space of piecewise continuous functions on $[a,b]$ equipped with the norm $\|f\|_2 = \left(\int_a^b |f(t)|^2\,dt\right)^{1/2}$, which is defined in terms of the familiar Riemann integral. We can then use the Hilbert space framework to derive analytic results, such as convergence of Fourier series of elements in $L^2$, and in particular we can get results on Fourier series for honest functions, such as $L^2$ convergence for piecewise continuous functions, or uniform convergence for periodic and continuously differentiable functions.
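As a purely numerical (not rigorous) illustration of this kind of $L^2$ convergence, one can watch the partial sums of the Fourier series of a square wave: the $L^2$ error decreases steadily even though convergence is not uniform near the jump. The square wave, grid size, and cutoffs below are my own choices for the sketch:

```python
import numpy as np

# Square wave on [-pi, pi): f = -1 on [-pi, 0), f = +1 on [0, pi).
x = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
dx = x[1] - x[0]
f = np.where(x < 0, -1.0, 1.0)

def partial_sum(N):
    # Fourier series of the square wave: (4/pi) * sum over odd k <= N of sin(kx)/k
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):
        s += (4.0 / np.pi) * np.sin(k * x) / k
    return s

def l2_err(N):
    # Riemann-sum approximation of the L^2 norm of f - S_N f
    return np.sqrt(np.sum((f - partial_sum(N)) ** 2) * dx)

errs = [l2_err(N) for N in (1, 5, 25, 125)]
assert all(b < a for a, b in zip(errs, errs[1:]))  # L^2 error decreases with N
```

By Parseval's identity the error is governed by the tail $\sum_{\text{odd } k > N} k^{-2}$, which is consistent with the slow but monotone decay the script exhibits.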

Working in this fashion may seem clumsy when one is already used to working with the Lebesgue integral, but, for many applications to analysis it suffices. Moreover, it shows some of the advantages of taking a functional analytic point of view.

I did not invent the approach of defining spaces as completions of certain spaces of nice functions, but I think that this book is unique in the extent to which the author really adheres to this approach: once the spaces are defined this way, we never look back, and *everything* is done with no measure theory.

To illustrate, in Section 8.2 we prove the mean ergodic theorem. A measure-preserving composition operator on $L^2$ is defined first on the dense subspace of continuous functions, and then extended by continuity to the completion. The mean ergodic theorem is proved by Hilbert space methods, as a nice application of some basic operator theory. The statement (see Theorem 8.2.5) is in itself significant and interesting even for piecewise continuous functions — one does not need to know the traditional definition of $L^2$ in order to appreciate it.
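A numerical sketch of the phenomenon (not the book's code): for an irrational rotation of the circle, the Cesàro averages of a piecewise continuous function converge in the $L^2$ sense to its mean, as the mean ergodic theorem predicts. The rotation angle and the indicator function are assumptions of the sketch:

```python
import numpy as np

theta = np.sqrt(2.0) - 1.0                    # an irrational rotation angle
x = np.linspace(0.0, 1.0, 4096, endpoint=False)
f = lambda t: ((t % 1.0) < 0.5).astype(float) # indicator of [0, 1/2): piecewise continuous
mean = 0.5                                    # the integral of f over [0, 1)

def cesaro(N):
    # (1/N) * sum_{k<N} f(T^k x), where T x = x + theta (mod 1) is measure preserving
    return sum(f(x + k * theta) for k in range(N)) / N

def rms(N):
    # discretized L^2 distance between the Cesaro average and the constant `mean`
    return np.sqrt(np.mean((cesaro(N) - mean) ** 2))

assert rms(400) < rms(20) < rms(1)            # averages approach the mean in L^2
```

The decay rate here is controlled by the equidistribution (discrepancy) of the orbit $\{k\theta\}$, which is why a quadratic irrational like $\sqrt{2}-1$ is a convenient choice.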

Needless to say, this approach was taken because of pedagogical constraints, and I encourage all my students to take a course on measure theory if they are serious about mathematics, *especially* if they are interested in functional analysis. The disadvantages of the approach we take to spaces are highlighted whenever we stare them in the face; for example, in Section 5.3, where we obtain the existence of weak solutions to PDEs in the plane, but fall short of showing that weak solutions are (in some cases) solutions in the classical sense.

The choice of topics and their order was also influenced by my personal teaching philosophy. For example, Hilbert spaces and operators on them are studied before Banach spaces and operators on them. The reasons for this are **(a) **I wanted to get to significant applications to analysis quickly, and **(b)** I do not think that there is a point in introducing greater generality before one can prove significant results in that generality. This is surely not the most efficient way to present the material, but there are plenty of other books giving elegant and efficient presentations, and I had no intention — nor any hope — of outdoing them.

A realistic plan for teaching this course in the format given at the Technion (13 weeks, three hours of lectures and one hour of exercises every week) is to use the material in this book, in the order it appears, from Chapter 1 up to Chapter 12, skipping Chapters 6 and 11. In such a course, there is often time to include a section or two from Chapters 6 or 11, as additional illustrative applications of the theory. Going through the chapters in the order they appear, skipping chapters or sections that are marked by an asterisk, gives more or less the version of the course that I taught.

In an undergraduate program where there is a serious course on harmonic analysis, one may prefer to skip most of the parts on Fourier analysis (except convergence of Fourier series), and use the rest of the book as a basis for the course, either giving more time for the applications, or by teaching the material in Chapter 13 on the Hahn-Banach theorems. I view the chapter on the Hahn-Banach theorems as the first chapter in further studies in functional analysis. In the course that I taught, this topic was given as supplementary reading to highly motivated and capable students.

There are exercises spread throughout the text, which the students are expected to work out. These exercises play an integral part in the development of the material. Additional exercises appear at the end of every chapter. I recommend that the student, as well as the teacher, read the additional exercises, because some of them contain interesting material that is good to know (e.g., Gibbs phenomenon, von Neumann’s inequality, Hilbert-Schmidt operators). The teaching assistant will also find among the exercises some material better suited for tutorials (e.g., the solution of the heat equation, or the diagonalization of the Fourier transform).

There is no solutions manual, but I invite any instructor who uses this book to teach a course, to contact me if there is an exercise that they cannot solve. With time I may gradually compile a collection of solutions to the most difficult problems.

Some of the questions are original, most of them are not. Having been a student and a teacher in functional and harmonic analysis for several years, I have already seen many similar problems appearing in many places, and some problems are so natural to ask that it does not seem appropriate to try to trace who deserves credit for “inventing” them. I only give reference to questions that I deliberately “borrowed” in the process of preparing this book. The same goes for the body of the material: most of it is standard, and I see no need to cite every mathematician involved; however, if a certain reference influenced my exposition, credit is given.

The appendix contains all the material from metric and topological spaces that is used in this book. Every once in a while a serious student — typically majoring in physics or electrical engineering — comes and asks if he or she can take this course without having taken a course on metric spaces. The answer is: yes, if you work through the appendix, there should be no problem.

There are countless good introductory texts on functional analysis and operator theory, and the bibliography contains a healthy sample. As a student and later as a teacher of functional analysis, I especially enjoyed and was influenced by the books by Gohberg and Goldberg, Devito, Kadison and Ringrose, Douglas, Riesz and Sz.-Nagy, Rudin, Arveson, Reed and Simon, and Lax. These are all recommended, but only the first two are appropriate for a beginner. As a service to the reader, let me mention three more recent elementary introductions to functional analysis, by MacCluer, Haase, and Eidelman-Milman-Tsolomitis. Each one of these looks like an excellent choice for a textbook to accompany a first course.

I want to acknowledge that while working on the book I also made extensive use of the Web (mostly Wikipedia, but also MathOverflow/StackExchange) as a handy reference, to make sure I got things right, e.g., verify that I am using commonly accepted terminology, find optimal phrasing of a problem, etc.

This book could not have been written without the support, encouragement and good advice of my beloved wife, Nohar. Together with Nohar, I feel exceptionally lucky and thankful for our dear children: Anna, Tama, Gev, Em, Shem, Asher and Sarah.

I owe thanks to many people for reading first drafts of these notes and giving me feedback. Among them are Alon Gonen, Shlomi Gover, Ameer Kassis, Amichai Lampert, Eliahu Levy, Daniel Markiewicz, Simeon Reich, Eli Shamovich, Yotam Shapira, and Baruch Solel. I am sorry that I do not remember the names of all the students who pointed out a mistake here or there, but I do wish to thank them all. Shlomi Gover and Guy Salomon also contributed a number of exercises. A special thank you goes to Michael Cwikel, Benjamin Passer, Daniel Reem and Guy Salomon, who have read large portions of the notes, found mistakes, and gave me numerous and detailed suggestions on how to improve the presentation.

I bet that after all the corrections by friends and students, there are still some errors here and there. Dear reader: if you find a mistake, please let me know about it! I will maintain a page on my personal website in which I will collect corrections.

I am grateful to Sarfraz Khan from CRC Press for contacting me and inviting me to write a book. I wish to thank Sarfraz, together with Michele Dimont the project editor, for being so helpful and kind throughout. I also owe many thanks to Samar Haddad the proofreader, whose meticulous work greatly improved the text.

My love for the subject and my point of view on it were strongly shaped by my teachers, and in particular by Boris Paneah (my Master’s thesis advisor) and Baruch Solel (my Ph.D. thesis advisor). If this book is any good, then these men deserve much credit.

My parents, Malka and Meir Shalit, have raised me to be a man of books. This one, my first, is dedicated to them.


As for exercises:

**Exercise A: **Prove that has the ICC property.

**Exercise B: **Prove that there is an increasing sequence of von Neumann subalgebras of , such that is *-isomorphic to and such that .

**Exercise C: **Prove that the free group () has the ICC property.

**Exercise D: **Prove that . What can you say about ? (May require more advanced material: What can you say about , where is a countable discrete abelian group?).

**Exercise E:** We will later see that is not isomorphic to . It might be a nice exercise to think about it now (then again, it might not be a nice exercise — take your chances).

**Exercise F: **Let be a left convolver, and let be the corresponding convolution operator. Find the adjoint .

**Exercise G:** Prove that is a commutative group if and only if (or ) is commutative, and that this happens if and only if .

**Exercise H:** Prove that (where is the usual trace) is the unique linear functional on that satisfies and for all .


So far (in the first two lectures and in this one), the references I used for preparing these notes are Conway (A Course in Operator Theory), Davidson (C*-Algebras by Example), Kadison-Ringrose (Fundamentals of the Theory of Operator Algebras, Vol. I), and the notes on Sorin Popa’s homepage. But since I sometimes insist on putting the pieces together in a different order, the reader should be on the lookout for mistakes.

We begin by proving an interesting rigidity property of *-homomorphisms from C*-algebras: they are automatically contractive, and if they are injective then they are isometric.

**Theorem 1:** *Let $A$ be a (concrete) C*-algebra possessing a unit, and let $\pi : A \to B(K)$ be a *-homomorphism (meaning that $\pi$ is a linear map that also satisfies $\pi(ab) = \pi(a)\pi(b)$ and $\pi(a^*) = \pi(a)^*$). Then $\|\pi(a)\| \leq \|a\|$ for all $a \in A$. Moreover, if $\pi$ is injective, then $\|\pi(a)\| = \|a\|$ for all $a \in A$. *

In the above theorem, a C*-algebra possessing a unit is simply an algebra that has a multiplicative identity element, not necessarily equal to the identity operator $I_H$. If we want to say that the unit of $A \subseteq B(H)$ is actually equal to $I_H$, then we will say that $A$ is a * unital C*-subalgebra of * $B(H)$.

For the proof, we require the following lemma, which is sometimes referred to as **the spectral permanence theorem**. If $B$ is a unital Banach algebra, and $b \in B$, then the * spectrum of $b$ relative to $B$ * is the set

$\sigma_B(b) = \{\lambda \in \mathbb{C} : \lambda 1_B - b$ has no inverse in $B\}$.

Thus, the spectrum of an operator $T \in B(H)$ as we defined it in the first lecture is the spectrum relative to $B(H)$, i.e., $\sigma(T) = \sigma_{B(H)}(T)$. It is conceivable that if $a$ is an element of a unital C*-algebra $A \subseteq B(H)$, then $\sigma_A(a)$ is bigger than $\sigma_{B(H)}(a)$ (in other words, it is possible that an element of $A$ has an inverse in $B(H)$ which is not contained in $A$). One of the remarkable properties of C*-algebras is that this does not happen.

**Lemma:** *If $A$ is an operator in a unital C*-subalgebra $B \subseteq B(H)$, then $A$ is an invertible operator if and only if $A$ has an inverse in $B$. Consequently, for every unital C*-subalgebra $B \subseteq B(H)$, *

$\sigma_B(A) = \sigma_{B(H)}(A)$.

**Proof of the lemma:** Consider first the case of a selfadjoint operator $A \in B$. Then by the spectral theorem, we may as well assume that $A$ is a multiplication operator $M_f$. A moment of thought reveals that if $M_f$ is a bounded invertible operator, then the inverse must be equal to $M_{1/f}$. Since by the bounded inverse theorem $M_{1/f}$ is bounded, we find that there is some $\delta > 0$ such that $|f| \geq \delta$ almost everywhere. This implies that $0 \notin \sigma(A)$. Now, the function $t \mapsto t^{-1}$ is continuous on $\sigma(A)$, and by the continuous functional calculus we have that $A^{-1}$ is contained in the C*-algebra generated by $A$ and $I$.

If $A$ is a general invertible element in $B$, then $A^*A$ is also invertible, contained in $B$, and selfadjoint. Thus, by the previous paragraph, $(A^*A)^{-1} \in B$, so $A^{-1} = (A^*A)^{-1}A^*$ is in $B$.

Finally, the assertion regarding spectrum follows immediately from the assertion about invertible operators.
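In finite dimensions the selfadjoint case of the lemma is transparent: the inverse of an invertible selfadjoint matrix is the continuous function $t \mapsto 1/t$ applied to its spectrum, hence it lies in the C*-algebra generated by the matrix and the identity. A small numerical sketch (the random matrix is my own toy example):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)            # selfadjoint, with spectrum in [1, infinity)

# Continuous functional calculus: apply t -> 1/t to the spectrum of A.
w, V = np.linalg.eigh(A)           # A = V diag(w) V^T
inv_via_calculus = (V * (1.0 / w)) @ V.T

# The functional-calculus inverse agrees with the ordinary inverse.
assert np.allclose(inv_via_calculus, np.linalg.inv(A))
```

Since $t \mapsto 1/t$ can be uniformly approximated by polynomials on the (compact, bounded away from zero) spectrum, the inverse is a norm limit of polynomials in $A$, which is exactly why it stays inside the subalgebra.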

**Exercise A:** Give an example of an operator $T \in B(H)$ and a unital norm closed operator subalgebra $B \subseteq B(H)$ containing $T$ such that

$\sigma_B(T) \neq \sigma_{B(H)}(T)$.

**Proof of Theorem 1:** We may assume that $A$ is a unital C*-subalgebra of $B(H)$ and that $\pi(1_A) = I_K$ (because otherwise, $1_A$ and $\pi(1_A)$ are projections, and everything orthogonal to the ranges of these projections is irrelevant). For every $a \in A$, if $\lambda 1_A - a$ is an invertible operator, then it is invertible in $A$, and it follows that $\lambda I_K - \pi(a)$ is invertible in $\pi(A)$ (with inverse $\pi\big((\lambda 1_A - a)^{-1}\big)$). Thus, $\sigma(\pi(a)) \subseteq \sigma(a)$. Then, for every $a \in A$,

$\|\pi(a)\|^2 = \|\pi(a)^*\pi(a)\| = \|\pi(a^*a)\| = r(\pi(a^*a))$,

but – as $\sigma(\pi(a^*a)) \subseteq \sigma(a^*a)$ – the right hand side is less than or equal to $r(a^*a) = \|a^*a\| = \|a\|^2$. We find that $\|\pi(a)\| \leq \|a\|$ for every $a \in A$.

In particular, we find that every *-homomorphism is continuous.

Now, suppose that $\pi$ is injective, and assume for contradiction that $\pi$ is not isometric. Then there exists some positive $a \in A$ such that $\|\pi(a)\| < \|a\|$ (the fact that we can assume that there is a positive element on which norm preservation fails follows from the C* identity). Let $f$ be a continuous function on $[0, \|a\|]$ such that $f = 0$ on $[0, \|\pi(a)\|]$, and $f(\|a\|) = 1$. Since $\pi(p(a)) = p(\pi(a))$ for every polynomial $p$, and since $\pi$ is continuous, we have that $\pi(f(a)) = f(\pi(a))$. By the continuous functional calculus, $f(\pi(a)) = 0$, so $\pi(f(a)) = 0$, but this is a contradiction to injectivity, because $f(a) \neq 0$.
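Both halves of the proof lean on the C*-identity together with the fact that the norm of a selfadjoint element equals its spectral radius. For matrices this says that the operator norm is the square root of the largest eigenvalue of $a^*a$, which is easy to check numerically (random matrix chosen for the sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

op_norm = np.linalg.norm(a, 2)                              # operator norm ||a||
spec_rad = np.abs(np.linalg.eigvals(a.conj().T @ a)).max()  # r(a* a)

# C*-identity: ||a||^2 = ||a* a|| = r(a* a), since a* a is selfadjoint (positive).
assert np.isclose(op_norm ** 2, spec_rad)
```

This identity is what lets the norm be read off from the spectrum, and the spectrum shrinks under a *-homomorphism — the mechanism behind automatic contractivity.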

**Remark: **In these notes we are working with concrete C*-algebras. We have reached a point that nicely exemplifies the deficiency in this concrete approach. If we were using abstract C*-algebras, we would easily be able to show that the image of a C*-algebra under a *-homomorphism is a C*-algebra (the key issue is that it is closed). This is done by noting that, since a *-homomorphism is continuous, its kernel is a closed ideal. Therefore, we would be able to take the quotient , and we get an injective *-homomorphism which must be isometric, hence its image is closed. But the image of is equal to the image of , so the image of is a C*-algebra. Thus, we see that the abstract approach has significant advantages, even at the early parts of the theory.

**Definition 2:** For every , the projection onto is called the **range projection*** of * (also called the

**Exercise B:** Prove that if is a von Neumann algebra and , then . (**Hint:** First, assume that , and without loss of generality assume , and consider the monotone sequence (note that having the functional calculus always on your mind, this is a natural sequence to consider). To see the result for general operators, find the relationship between and .)

If is a family of projections on , we define to be the projection on the closed subspace spanned by the subspaces , and we define to be the projection onto the intersection . The projections and are called the * sum* and

**Proposition 3: ***If all the projections in the family are contained in a von Neumann algebra , then the sum and intersection are also in . *

**Proof:** It is enough to prove the claim for the sum (why?). Recall that a projection belongs to if and only if the range of is invariant for (see the lemma used in the proof of the double commutant theorem, in Lecture 2). Reversing the role of and , we see that if every is in , then every space is invariant for . It follows that is invariant for , thus .

The following two exercises give an alternative way of proving the above proposition.

**Exercise C:** If are projections, then and (if this is difficult, consult Kadison-Ringrose, Vol. I, Section 2.5).

**Exercise D:** Prove that if is a von Neumann algebra, and if , then and , by making use of the previous exercise, and applying the theorem on monotone nets of operators.
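In finite dimensions one can watch the alternating products $(PQ)^n$ converge to the projection onto the intersection of the two ranges — the standard SOT formula for the meet of two projections. The $3\times 3$ matrices below are my own toy example, in which the ranges meet exactly in the line spanned by $e_1$:

```python
import numpy as np

P = np.diag([1.0, 1.0, 0.0])                  # projection onto span{e1, e2}
u = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
e1 = np.array([1.0, 0.0, 0.0])
Q = np.outer(e1, e1) + np.outer(u, u)         # projection onto span{e1, u}

# ran P and ran Q intersect exactly in span{e1}; (PQ)^n converges to that projection.
X = np.linalg.matrix_power(P @ Q, 200)
meet = np.outer(e1, e1)                       # the projection P ^ Q
assert np.allclose(X, meet)
```

On the orthogonal complement of the common range, $PQ$ is a strict contraction (its norm is the cosine of the angle between the subspaces, here $1/\sqrt{2}$), which is why the powers die off geometrically.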

**Definition 4: **The * center* of a von Neumann algebra is defined to be . A projection is said to be a

The center of a von Neumann algebra is a commutative von Neumann algebra. Note that although the commutant of a von Neumann algebra depends on the particular representation of the von Neumann algebra and not only on the *-algebraic structure, the center depends only on the *-algebraic structure.

**Definition 5:** A von Neumann algebra is said to be a * factor* if .

Clearly, a von Neumann algebra is a factor if and only if it has no non-trivial central projections.

**Examples: **

- The algebra is a factor (we have essentially seen already that so ). At the other extreme, every commutative von Neumann algebra is its own center. The only commutative von Neumann algebra that is a factor is .
- We will see in a later lecture that if is a countable group and if for every , the conjugacy class is infinite, then the group von Neumann algebra is a factor. (A group for which for every is called an ICC group; examples of countable ICC groups are the free groups , and the group consisting of all permutations of the natural numbers that fix all but finitely many elements.)

In a way that can be made precise (but will probably not be made precise in this course) factors form the “building blocks” of von Neumann algebras. We will now see that is a factor if and only if it has no non-trivial weakly closed ideals, so that the factors are in a sense the “simple” von Neumann algebras.

If , then it is easy to see that is a *-subalgebra of , and in fact, it is WOT closed. To see that it is WOT closed, note that every element in can be identified with the operator . The map can therefore be considered as a (contractive) *-homomorphism of onto , which maps onto (onto, because this map is “the identity” on ). Since this map is WOT continuous and is WOT compact, we obtain that is WOT compact. Invoking Corollary 15 of the previous lecture, we see that can be considered to be a von Neumann algebra in . It follows that is a WOT closed, *-closed, two sided ideal in . Note that in this case we have the decomposition , i.e.,

.

It turns out that all WOT closed (two sided) ideals have this form.

**Theorem 6: **Let be a von Neumann algebra, and let be a WOT closed two sided ideal. Then , and, moreover, there exists a central projection such that .

**Exercise E:** Prove Theorem 6. (**Hint:** To prove that is selfadjoint, use the polar decomposition. To prove the existence of the form , note that the projection , if it exists, must be equal to the sup of all projections in , thus one can define as the supremum and prove that such a supremum does what we want. To show that there are sufficiently many projections in use the Borel functional calculus. The notion of range projection (see the next section) will also be useful for proving that .)

**Corollary 7: ***A von Neumann algebra is a factor if and only if has no non-trivial WOT closed ideals. *

**Proof:** Indeed, is a factor precisely when its center has no non-trivial projections, and this corresponds to the situation where there are no non-trivial WOT closed ideals.

**Definition 8:** Let be a von Neumann algebra. For every operator , the * central cover* of (also called the

.

**Exercise F:** Suppose that is a projection in a von Neumann algebra . Prove that is the orthogonal projection onto the subspace

.

Suppose that is a von Neumann algebra, an let be a projection. We can define a set by

.

If is either in or in , then is a *-algebra, called * the reduced* or

**Proposition 9:** *Let be a von Neumann algebra, and let . Then and are both von Neumann algebras on , and are mutual commutants: . *

**Proof:** To prove that is a von Neumann algebra, one runs an argument similar to the case where , which we treated above. We will show that is a von Neumann algebra (this will also follow from , but the proof is interesting in itself).

To see that is a von Neumann algebra on , we consider the map , which is a WOT continuous *-homomorphism. Its kernel is therefore a WOT closed ideal, hence, by Theorem 6, equal to for some . On the map must be injective. Therefore, it is isometric (Theorem 1). So , and the latter – as the WOT continuous image of a WOT compact set – is WOT compact, and therefore – by Corollary 15 in the previous lecture – a von Neumann algebra.

Let . Then for all ,

,

and this shows that .

For the reverse inclusion (which we will prove as , using the fact that we already established that is a von Neumann algebra), it suffices to show that every unitary is the compression of some (recall Exercise D in the first lecture, which shows that a C*-algebra is spanned by its unitary elements). For such a , we define an operator by

,

for and , and extending linearly. We see that

,

for , and .

So, is an isometry on . We extend to be a partial isometry on by defining on . (The partial isometry satisfies that is the orthogonal projection onto , that is, it is (computed in )). For every we have (1) for all and

and (2) for every we have , so

.

Therefore, , so . But , and the proof is complete.

**Exercise G:** Prove that also in the case that , it also holds that is a von Neumann algebra and that . Moreover, show that if or , then .

**Theorem 10: ***Let be a separable Hilbert space and a commutative von Neumann algebra. Then there is a selfadjoint operator such that . *

**Proof:** By Corollary 15 in Lecture 2, the unit ball of is WOT compact, and by Exercise E in that lecture, it is metrizable. Thus, is separable (as every compact metric space is). Since every selfadjoint operator can be approximated by its spectral projections associated with intervals with rational endpoints, we see that there is a sequence that generates as a von Neumann algebra.

We now claim that the operator generates as a von Neumann algebra. To establish this, it is enough to show that for all . In fact, it suffices to concentrate on showing that , because if that is true then will be in and one can proceed inductively.

We have the direct sum (recall that is a commutative algebra). Let as above. Since ,

.

It follows that . Likewise, , so , therefore .

We see that . Therefore, is continuous on , and , as required.

**Definition 11:** Let be a *-algebra. A vector is said to be * cyclic* if . It is said to be

**Lemma 12:** If is cyclic for a *-algebra , then is separating for . The converse is also true when is non-degenerate.

**Proof: **Exercise (easy).

**Proposition 13:** *Every commutative von Neumann algebra on a separable Hilbert space has a separating vector.*

**Proof:** We will prove that for every von Neumann algebra has a cyclic vector; the result then follows from the lemma above.

As in Exercise I of Lecture 1, we may write as a direct sum of “cyclic subspaces”, where for some . Since every is invariant for , the projection onto belongs to (the inclusion following from commutativity). It follows that the vector is cyclic for . Indeed, for every , , so , and this completes the proof.

We now describe what commutative von Neumann algebras “look like”, and this will lead to a classification of all commutative von Neumann algebras.

Recall that the * support* of a Borel measure on , denoted , is the closure of the set of all points , that satisfy

for every neighborhood , .

A measure is said to be * compactly supported *if its support is compact.

**Theorem 14:** *Let be a commutative von Neumann algebra on a separable Hilbert space. Then there exists a regular, compactly supported, Borel probability measure on , such that is *-isomorphic to . *

**Proof:** Let be a separating vector for , given by Proposition 13. Let , and . Then , so is a *-homomorphism from onto . In fact, it is a *-isomorphism, because is a separating vector. We will show that is unitarily equivalent to for some .

Let be an operator – the existence of which is guaranteed by Theorem 10 – for which . As the map is a WOT continuous *-isomorphism, we have that is generated as a von Neumann algebra by . The vector is cyclic for , so by a previous result (see Exercise I in Lecture 2) we have that is unitarily equivalent to , where is as we require. That completes the proof.

Now it is our goal to understand what kind of C*-algebras arise as for a regular, compactly supported, Borel probability measure on the real line.

Given such a measure, let . Then the cardinality of is at most . Define by

and by

.

Generally, a measure is said to be * discrete *if , and

**Exercise H: **Prove that is *-isomorphic to .

**Exercise I:** Prove that if is a discrete probability measure, then there exists a countable set such that is *-isomorphic to .

It remains to understand for finite continuous measures. It turns out that the only von Neumann algebra that arises this way is $L^\infty[0,1]$ (where the measure is the Lebesgue measure).

**Theorem 15:** *Let be a compactly supported and continuous regular Borel probability measure on the real line. Then is *-isomorphic to . *

**Proof: **Assume that the support of is contained in , and assume also that . Thus, we can think of as a measure on . It will be convenient also to set , and to think of as a Borel probability measure on . It is straightforward to show that , and we switch back and forth between the two viewpoints as convenient.

We will construct a map such that is a *-isomorphism of onto .

The map is defined by

.

(In order for this to have meaning it is convenient to think of as a measure on . In fact we can also think of as a map defined ). Then is a non-decreasing function, so it is Borel measurable. In fact, the map is strictly increasing on , with the exception of a countable number of pairs for which (there are at most countably many such pairs, because for each such pair the interval is disjoint from but contained in , thus the total length of these intervals is finite). Moreover, is onto, because is continuous. We can therefore invert , missing at most countably many points in . The map is also increasing, so it is Borel measurable.

Now, is measure preserving. By this, we mean that if , then is equal to the Lebesgue measure of . Since both measures are regular, and since maps intervals to intervals, it is enough to consider the case of . Let such that . Then

,

but the latter is the Lebesgue measure of , as required. The converse is also true, so preserves the measure in both directions.
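The measure-preserving property of the distribution function is the classical probability integral transform, and it is easy to see empirically. The concrete measure below is an assumption of the sketch: the Beta(2,2) distribution on $[0,1]$, a continuous compactly supported probability measure whose CDF is $F(x) = 3x^2 - 2x^3$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Samples distributed according to mu = Beta(2, 2); F below is its CDF.
X = rng.beta(2.0, 2.0, size=100_000)
U = 3.0 * X**2 - 2.0 * X**3            # F(X): push mu forward through its CDF

# F_* mu should be Lebesgue measure on [0,1], i.e. U should be uniform:
# the empirical CDF of U is (approximately) the identity function.
grid = np.linspace(0.0, 1.0, 11)
emp = np.array([(U <= t).mean() for t in grid])
assert np.max(np.abs(emp - grid)) < 0.01
```

Continuity of $\mu$ is what makes $F$ onto (no flat jumps of $F^{-1}$), matching the hypothesis of Theorem 15.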

Now one has to show that the map is a well defined *-isomorphism of onto . The final details are left to the reader.

**Conclusion:** *Every commutative von Neumann algebra on a separable Hilbert space has one of the following forms (up to *-isomorphism): *

- ,
- , for a countable set ,
- , for a countable set .


On we have the topology generated by the operator norm. There are other natural topologies that may be used.

**Definition 1:** The **weak operator topology** (**WOT**, sometimes also called the *weak topology*) is the topology on generated by the sets

.

where , and (recall that this means that a basis for this topology is given by finite intersections of the above sets). In fact, it suffices to consider sets of the form (why?). Equivalently, a net converges to in the weak operator topology if and only if for all .

**Remark:** It is common to call the weak operator topology “the weak topology”, for short (and also because that is what von Neumann called it). Since is a Banach space, the term “weak topology” can also mean the topology generated on by the space of bounded functionals on ; however, *that* weak topology is rarely used, so this abuse of terminology hardly ever causes confusion.

**Definition 2:** The **strong operator topology** (**SOT**, sometimes also called the *strong topology*) is the topology on generated by the sets

.

where , and . As above, it suffices to consider sets of the form . Equivalently, a net converges to in the strong operator topology if and only if (in the norm) for all .

**Remark:** Once again, the terminology “strong topology” is used in Banach space theory to refer to the norm topology, in contrast to the weak topology. However, in our setting, since “weak topology” does not mean the topology on induced by the Banach space dual , “strong topology” is usually not used for the norm topology, but for the strong operator topology.

A beginner might be tempted to think that, since the norm topology comes from a norm and in particular is a metric space topology, it should be more convenient to work with than the above two topologies (it turns out that the weak and strong topologies are not metrizable). *Why would one want to work with a weaker topology?* One reason is that a closed set has better chances of being compact in a weaker topology. Another reason is that closures are bigger, hence richer. For example, we have already seen that the spectral projections of a selfadjoint operator are not in the norm closed algebra generated by , but rather in the weak/strong closed algebra generated by (these turn out to be the same, see below).

There are other natural topologies, we shall encounter them later.

It is easy to see that the weak and strong topologies are Hausdorff, and that addition and scalar multiplication are jointly continuous. If , then the maps and are WOT and SOT continuous. The adjoint operation is WOT continuous. Indeed, if wot, then

.

However, the adjoint operation is not SOT continuous. For example, if $S$ denotes the unilateral shift on $\ell^2(\mathbb{N})$ (that is, $S e_n = e_{n+1}$), then $(S^*)^n$ converges SOT to $0$ as $n \to \infty$, but $\big((S^*)^n\big)^* = S^n$ does not.
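The shift example can be seen numerically with a truncated shift matrix; the truncation size and power are assumptions of the sketch, and the power must stay well below the truncation size so that the finite matrix mimics the infinite-dimensional operator:

```python
import numpy as np

n = 400
S = np.diag(np.ones(n - 1), -1)     # truncated unilateral shift: S e_k = e_{k+1}
x = np.zeros(n); x[0] = 1.0         # the vector e_1

k = 50                              # keep k << n so the truncation is invisible
back = np.linalg.matrix_power(S.T, k) @ x   # (S*)^k e_1 = 0
fwd = np.linalg.matrix_power(S, k) @ x      # S^k e_1 = e_{k+1}

assert np.linalg.norm(back) == 0.0           # consistent with (S*)^k -> 0 in SOT
assert np.isclose(np.linalg.norm(fwd), 1.0)  # but ||S^k x|| = 1 for all k: no SOT limit 0
```

So along the net $(S^*)^n \to 0$ strongly, the adjoints $S^n$ keep every vector at full norm, which is exactly the failure of SOT continuity of the adjoint.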

**Exercise A: **Prove that the strong topology is strictly stronger than the weak topology.

Multiplication is not jointly continuous, in either the WOT or the SOT, but it is SOT continuous when restricted to bounded sets.

**Exercise B: **Show that multiplication is not jointly SOT continuous (this requires finding a tricky example). However, show that the map given by is SOT continuous (here, denotes the closed unit ball of ). What about joint WOT continuity of multiplication?

**Exercise C:** Explain why the previous exercise implies that the strong topology is not metrizable. What about the weak topology – is it metrizable?

**Theorem 3: ***The closed unit ball of is WOT compact.*

**Proof:** The proof is similar to the proof of Alaoglu’s theorem. For every , define

.

Let , equipped with the product topology. By Tychonoff’s theorem, is compact. We consider as a subspace of the space of all functions , equipped with the topology of pointwise convergence, which is the same thing as the product topology.

Now let be defined by

.

is injective, and by definition, is a homeomorphism between with the weak operator topology and with the topology of pointwise convergence. To show that is compact, it suffices to show that is closed.

Now, if , then it is not hard to show that must be a sesqui-linear form on and it is bounded in the sense that . It follows from a familiar consequence of the Riesz representation theorem that there is some , such that for all , therefore , as required.

**Exercise D: **Show that with the strong topology is not compact.

**Exercise E: **If is separable, then is metrizable in both the weak and the strong operator topologies. The metric for with the weak topology is

,

where is a dense sequence in the unit ball of . Fill in the rest of the details.

We recall some definitions, and give a couple of new ones.

**Definition 4:** A *-subalgebra of that is SOT closed and contains is called a **von Neumann algebra**.

**Definition 5:** For a set $\mathcal{S} \subseteq B(H)$, * the commutant of * $\mathcal{S}$ is the set $\mathcal{S}'$ defined by

$\mathcal{S}' = \{T \in B(H) : TS = ST \ \text{for all} \ S \in \mathcal{S}\}$.

**Exercise F: **Prove that and that .

**Definition 6:** The * null space * of a subset $\mathcal{S} \subseteq B(H)$ is the set

$N(\mathcal{S}) = \{h \in H : Th = 0 \ \text{for all} \ T \in \mathcal{S}\}$.

**Definition 7:** A subset $\mathcal{S} \subseteq B(H)$ is said to be * non-degenerate* if $[\mathcal{S}H] = H$ (where $[\mathcal{G}]$ denotes the closed linear span of a set $\mathcal{G}$).

**Exercise G:** A *-algebra $A \subseteq B(H)$ is non-degenerate if and only if $N(A) = \{0\}$.

**Theorem 8 (von Neumann’s double commutant** **theorem):** *Let $A \subseteq B(H)$ be a *-subalgebra with a trivial null space. Then *

$\overline{A}^{WOT} = \overline{A}^{SOT} = A''$.

*In particular, a *-algebra $A \subseteq B(H)$ is a von Neumann algebra if and only if $A = A''$. *

Before the proof, we will prove a basic lemma, which will also emphasize an important role that the commutant plays.

**Lemma: ***Let be a -subalgebra, let be an orthogonal projection, and set . Then is invariant for if and only if . *

**Proof: **For , if and only if for all , which happens if and only if . Therefore, if and only if , or . Therefore, if is invariant for , then for every , the adjoint too, so , therefore . This argument works backwards as well.

**Proof of the double commutant theorem:** As $A''$ is WOT closed, it holds that

$\overline{A}^{SOT} \subseteq \overline{A}^{WOT} \subseteq A''$.

Now let $T \in A''$. To show that $T \in \overline{A}^{SOT}$, we have to find, for every $\epsilon > 0$ and every finite set of vectors $h_1, \ldots, h_n \in H$, an operator $a \in A$ such that $\|Th_i - ah_i\| < \epsilon$ for all $i = 1, \ldots, n$.

**Case 1: $n = 1$. **Let $P$ be the orthogonal projection onto the subspace $\overline{Ah_1}$. Then by the lemma, $P \in A'$. Now, for every $a \in A$ we have $a(h_1 - Ph_1) = ah_1 - Pah_1 = 0$ (because $ah_1 \in \overline{Ah_1}$), so the assumption $N(A) = \{0\}$ implies that $h_1 - Ph_1 = 0$, or $Ph_1 = h_1$.

Now since $T \in A''$ and $P \in A'$, we have

$Th_1 = TPh_1 = PTh_1 \in \overline{Ah_1}$.

This means that there is some $a \in A$ such that $\|Th_1 - ah_1\| < \epsilon$.

**Case 2: $n > 1$. **Define $H^{(n)} = H \oplus \cdots \oplus H$ ($n$ times), and for every operator $a \in B(H)$, define $a^{(n)} = a \oplus \cdots \oplus a$ in $B(H^{(n)}) = M_n(B(H))$ (the last equality should be clear, if it is not **STOP NOW!**).

Put $B = \{ a^{(n)} : a \in A \}$. Then $B$ is a $*$-subalgebra of $B(H^{(n)})$ with trivial null space. A calculation shows that $B' = M_n(A')$, and that $B'' = \{ T^{(n)} : T \in A'' \}$.

Now put $h = h_1 \oplus \cdots \oplus h_n$, where $h_1, \ldots, h_n$ are the given vectors. Applying Case 1 to the algebra $B$ and the vector $h$, we find $a \in A$ such that $\|T^{(n)}h - a^{(n)}h\| < \epsilon$, and this implies $\|Th_i - ah_i\| < \epsilon$ for all $i$.

**Corollary 9:** *A unital $*$-subalgebra $M \subseteq B(H)$ is a von Neumann algebra if and only if $M = M''$. *

**Exercise H:** Prove that every von Neumann algebra is the commutant of the image of a unitary representation.

**Corollary 10:** *Let $T \in B(H)$, and suppose that $T = V|T|$ is its polar decomposition. Then both $V$ and $|T|$ are in $W^*(T)$ (the von Neumann algebra generated by $T$). *

**Proof: **We have already seen that $|T| \in C^*(T) \subseteq W^*(T)$. To show that $V \in W^*(T)$, we will show that $V$ commutes with every $S \in W^*(T)'$ (here we are using the double commutant theorem). Given such $S$, we have

$SV|T| = ST = TS = V|T|S = VS|T|$.

It follows that $SVh = VSh$ for every $h \in \overline{\operatorname{ran}} |T|$. To finish the proof we need to show that $SVh = VSh$ for every $h \in (\overline{\operatorname{ran}} |T|)^{\perp} = \ker |T| = \ker T$. By the definition of $V$, $Vh = 0$ for $h \in \ker T$, so $SVh = 0$. On the other hand, if $h \in \ker T$, then $TSh = STh = 0$, so $Sh \in \ker T$ and $VSh = 0$. Thus $SVh = VSh$ for all $h \in H$, and we are done.

**Example:** If $A = B(H)$, then $A' = \mathbb{C}I$, and $A'' = B(H)$. More generally, if $A \subseteq B(H)$ has no nontrivial closed invariant subspaces, then $A' = \mathbb{C}I$, and $A'' = B(H)$.

The first equality follows from the fact that $A$ has no nontrivial invariant subspaces, thus the only projections in $A'$ are $0$ and $I$. Since a von Neumann algebra is generated by its projections, $A' = \mathbb{C}I$.

Similarly, if $K(H)$ is the algebra of compact operators on $H$, then $K(H)' = \mathbb{C}I$ and $K(H)'' = B(H)$ (that the SOT/WOT closure of $K(H)$ is $B(H)$ is also easy to prove directly, but note how it just falls out of the air here).

**Example:** Let $(X, \mu)$ be a probability space, and consider the algebra $\mathcal{M} = \{ M_f : f \in L^\infty(\mu) \} \subseteq B(L^2(\mu))$, where we are using the notation introduced in the previous lecture:

$M_f g = fg$ for $g \in L^2(\mu)$.

If you check carefully the notes of the previous lecture, you will see that we never really proved that this algebra is SOT closed. We shall now see that $\mathcal{M}' = \mathcal{M}$, and it will follow that $\mathcal{M}$ is a von Neumann algebra. It is common to abuse notation and write $L^\infty(\mu)' = L^\infty(\mu)$.

Of course, $\mathcal{M} \subseteq \mathcal{M}'$ because $\mathcal{M}$ is commutative. Let $T \in \mathcal{M}'$. If $T$ was in $\mathcal{M}$, say $T = M_f$, then we would be able to recover $f$ as $f = T1$. Here, $1$ denotes the constant function with value $1$ everywhere on $X$, which is in $L^2(\mu)$ because $\mu$ is a probability measure. We therefore define $f = T1$, and hope to be able to show that $f$ – originally only known to be in $L^2(\mu)$ – is bounded, and that $T = M_f$. But this is easy: for every function $g \in L^\infty(\mu)$,

$Tg = T M_g 1 = M_g T 1 = M_g f = fg$,

therefore $\|fg\|_2 = \|Tg\|_2 \leq \|T\| \|g\|_2$. It follows (for example, by considering characteristic functions of sets of finite measure) that $f \in L^\infty(\mu)$ with $\|f\|_\infty \leq \|T\|$, and the equation above shows that $M_f$ agrees with $T$ on the space $L^\infty(\mu)$, which is dense in $L^2(\mu)$. Since both operators are bounded, they are equal.

Surely one can prove directly that is a strongly closed algebra, but the above simple computation shows the beauty and power of the double commutant theorem: an analytical problem (involving, perhaps, the limits of convergent unbounded nets) is reduced to a rather simple minded algebraic problem: computing the commutant.
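Here is a finite-dimensional toy version of this computation (my own illustration, not from the lecture): $L^2$ of the uniform measure on $n$ points is $\mathbb{C}^n$, a multiplication operator $M_f$ is a diagonal matrix, and the proof's trick of applying $T$ to the constant function $1$ recovers the function on the nose.

```python
import numpy as np

# L^2 of the uniform measure on 4 points is C^4, and M_f is a diagonal matrix
x = np.array([2.0, -1.0, 0.5, 3.0])   # the "function" f, given by its 4 values
Mf = np.diag(x)

# the proof's trick: applying an operator in the commutant to the constant
# function 1 recovers the function; for T = M_f this is literally f = T1
one = np.ones(4)
f_recovered = Mf @ one
assert np.allclose(f_recovered, x)

# an operator with a nonzero off-diagonal entry cannot commute with every M_g:
Mg = np.diag([0.0, 1.0, 0.0, 0.0])    # g = indicator of the second point
T = np.eye(4); T[0, 1] = 1.0
assert not np.allclose(T @ Mg, Mg @ T)
```

The second check is the finite analogue of $\mathcal{M}' \subseteq \mathcal{M}$: commuting with all multiplication operators forces $T$ to be diagonal, i.e., itself a multiplication operator.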

**Example:** Suppose that $X$ is a locally compact Hausdorff space, and that $\mu$ is a regular Borel measure on $X$. Identify $C_c(X)$, the algebra of continuous, compactly supported functions on $X$, with the algebra of operators $\{ M_f : f \in C_c(X) \} \subseteq B(L^2(\mu))$. Using an argument similar to the one above, one can show that $C_c(X)'' = \{ M_f : f \in L^\infty(\mu) \}$.

**Exercise I:** Prove that the von Neumann algebra generated by a selfadjoint operator that has a cyclic vector, is unitarily equivalent to , where is a probability measure on . Give an example of an operator (necessarily, non-cyclic) such that the von Neumann algebra that it generates is **not **unitarily equivalent to (for any probability space ). Your example is still related to – how so?

**Theorem 11: ***Let $(A_\alpha)$ be an increasing net of selfadjoints in a von Neumann algebra $M \subseteq B(H)$. Suppose that this net is bounded above, in the sense that there is some operator $B = B^*$ such that $A_\alpha \leq B$ for all $\alpha$. Then $(A_\alpha)$ is SOT convergent to some $A \in M$, and $A$ is the least upper bound for $(A_\alpha)$ in $B(H)_{sa}$. In case every $A_\alpha$ is a projection, $A$ is the projection onto the closure of the union of the ranges of the $A_\alpha$. *

**Proof: **For every $h \in H$,

$\langle A_\alpha h, h \rangle \leq \langle B h, h \rangle$,

and the left hand side is a bounded increasing net of real numbers; it therefore has a limit, which we write as

$q(h) = \lim_\alpha \langle A_\alpha h, h \rangle$.

Now, using the polarization identity, we define

$s(g, h) = \lim_\alpha \langle A_\alpha g, h \rangle$.

Then $|s(g, h)| \leq \sup_\alpha \|A_\alpha\| \|g\| \|h\|$, so it is a bounded sesquilinear form (note that the net is bounded in norm, because $A_{\alpha_0} \leq A_\alpha \leq B$ for a fixed $\alpha_0$ and all $\alpha \geq \alpha_0$). It follows from a familiar consequence of the Riesz theorem that there exists an $A \in B(H)$ such that $s(g, h) = \langle Ag, h \rangle$ for all $g, h \in H$. It easily follows that $A = A^*$, and that $A_\alpha \leq A$ for all $\alpha$. Moreover, if $A_\alpha \leq C$ for all $\alpha$, then $\langle Ah, h \rangle = \lim_\alpha \langle A_\alpha h, h \rangle \leq \langle Ch, h \rangle$ for all $h$, so $A \leq C$. We therefore have that $A$ is the least upper bound of $(A_\alpha)$. Being the WOT limit of the net $(A_\alpha)$, the operator $A$ belongs to $M$.

SOT convergence is trickier. Without loss of generality, let us assume that $A_\alpha \geq 0$ for all $\alpha$ (if not, then just consider the net $(A_\alpha - A_{\alpha_0})_{\alpha \geq \alpha_0}$ for some fixed index $\alpha_0$).

Now $A - A_\alpha \geq 0$, so

$\|(A - A_\alpha) h\|^2 = \langle (A - A_\alpha)^2 h, h \rangle \leq \|A - A_\alpha\| \langle (A - A_\alpha) h, h \rangle \to 0$,

where the inequality is justified by the following claim.

**Claim: ***For every positive operator $S$, it holds that $S^2 \leq \|S\| S$, in the sense that $\langle S^2 h, h \rangle \leq \|S\| \langle S h, h \rangle$ for all $h \in H$. *

**Proof of claim: **This follows immediately from the functional calculus, because for a bounded nonnegative function $f$, it clearly holds that $f^2 \leq \|f\|_\infty f$ almost everywhere.

That concludes the proof of the claim, so we have that $A_\alpha \to A$ in the SOT.

The final assertion (regarding projections) is left as an exercise.

**Exercise J:** Prove that if $(P_\alpha)$ is a bounded increasing net of projections, then $\lim_\alpha P_\alpha$ (in the SOT) is the projection onto the closure of the union of the ranges of the $P_\alpha$.
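A small numerical illustration of the phenomenon in Theorem 11 and Exercise J (my own sketch, assuming numpy): for the increasing chain of projections onto $\operatorname{span}(e_1, \ldots, e_k)$, convergence to the supremum happens vector by vector (SOT), while the distance in operator norm stays equal to $1$ until the very end.

```python
import numpy as np

n = 8
def P(k):
    """Projection onto span(e_1, ..., e_k): an increasing chain of projections."""
    return np.diag([1.0] * k + [0.0] * (n - k))

x = np.ones(n) / np.sqrt(n)   # a fixed unit vector

# SOT convergence to the supremum (here, the identity): ||P_k x - x|| decreases to 0
errs = [np.linalg.norm(P(k) @ x - x) for k in range(1, n + 1)]
assert all(errs[i] >= errs[i + 1] for i in range(len(errs) - 1))
assert errs[-1] < 1e-12

# but there is no norm convergence along the way: ||P_k - I|| = 1 for all k < n
for k in range(1, n):
    assert abs(np.linalg.norm(P(k) - np.eye(n), 2) - 1.0) < 1e-12
```

In infinite dimensions the same chain (projections onto the spans of the first $k$ basis vectors) converges SOT to $I$ without ever converging in norm, which is why Theorem 11 is genuinely about the strong topology.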

**Remark:** Here are another two ways to see that $S^2 \leq \|S\| S$ for a positive operator $S$. First, recall that if $A \geq 0$ and if $B$ is any operator, then $B^* A B \geq 0$. Indeed:

$\langle B^* A B h, h \rangle = \langle A (Bh), Bh \rangle \geq 0$.

Now, we simply apply this to $A = \|S\| I - S \geq 0$, and $B = S^{1/2}$:

$\|S\| S - S^2 = S^{1/2} (\|S\| I - S) S^{1/2} \geq 0$.

Alternatively, by the spectral mapping theorem,

$\sigma(\|S\| S - S^2) = \{ \|S\| \lambda - \lambda^2 : \lambda \in \sigma(S) \} \subseteq [0, \infty)$.

But, by our characterization of the norm of a selfadjoint operator, a selfadjoint operator $R$ with nonnegative spectrum is positive: indeed,

$\| \|R\| I - R \| = \sup_{\lambda \in \sigma(R)} (\|R\| - \lambda) \leq \|R\|$,

so $\langle R h, h \rangle \geq \|R\| \|h\|^2 - \| \|R\| I - R \| \|h\|^2 \geq 0$. Applying this with $R = \|S\| S - S^2$ gives the claim.

By the double commutant theorem, if $T \in \overline{A}^{SOT} = A''$, then there is a net $(a_\alpha)$ in $A$ such that $a_\alpha \to T$ in the SOT.

**Technical problem:** Convergent nets may be unbounded.

**Technical solution: **Kaplansky’s density theorem below shows that one may assume that the net is bounded by $\|T\|$.

**Exercise K: **Use Kaplansky’s density theorem (Theorem 14 below) to prove directly that is SOT closed (“directly” means without making use of the double commutant theorem).

**Proposition 12:** *Let $C \subseteq B(H)$ be a convex set. Then, the WOT and SOT closures of $C$ coincide.*

**Remark:** Recall that in a normed space, a convex set is weakly closed if and only if it is closed in the norm, hence the weak and norm closures of a convex set are the same (short explanation: every weakly closed set is strongly closed, and conversely, if a strongly closed set is convex, then, by the Hahn-Banach theorem, is equal to the intersection of closed half spaces, hence it is weakly closed as the intersection of weakly closed sets).

**Proof of Proposition 12:** Clearly, $\overline{C}^{SOT} \subseteq \overline{C}^{WOT}$. Let $T \in \overline{C}^{WOT}$. To show that $T \in \overline{C}^{SOT}$, we fix $\epsilon > 0$ and $h_1, \ldots, h_n \in H$, and find $S \in C$ such that $\|Th_i - Sh_i\| < \epsilon$ for all $i$ – this will show that there is a point of $C$ in every open set of the form $\{ R : \|Th_i - Rh_i\| < \epsilon, i = 1, \ldots, n \}$, whence $T \in \overline{C}^{SOT}$.

Assume that $n = 1$, otherwise repeat the trick from the proof of the double commutant theorem. Write $h = h_1$. Now, $T \in \overline{C}^{WOT}$, so it follows that

$Th \in \overline{\{ Sh : S \in C \}}^{w} = \overline{\{ Sh : S \in C \}}^{\|\cdot\|}$,

where the last equality follows from the remark preceding the proof (the set $\{ Sh : S \in C \}$ is convex). Thus, by definition, there is some $S \in C$ such that $\|Th - Sh\| < \epsilon$, as required.

From general functional analysis (in a locally convex topological vector space, a linear functional is continuous if and only if its kernel is closed) we obtain:

**Corollary 13:** *The SOT and the WOT have the same continuous linear functionals. *

**Theorem 14 (Kaplansky’s density theorem): ***Let $A$ be a non-degenerate *-subalgebra of $B(H)$, and let $M = \overline{A}^{SOT}$. Then*

*1. $(A)_1$ is SOT dense in $(M)_1$ (i.e., the closed unit ball of $A$ is SOT dense in the closed unit ball of $M$).*
*2. $(A_{sa})_1$ is SOT dense in $(M_{sa})_1$.*
*3. If $A$ is a unital C*-algebra, then the unitaries of $A$ are SOT dense in the unitaries of $M$.*

**Proof:** The heart of the matter is to prove item 2: that is, to prove $(M_{sa})_1 \subseteq \overline{(A_{sa})_1}^{SOT}$. This done, the rest of the assertions follow by various tricks or by adapting the arguments. For example, suppose that we know that item 2 holds for every non-degenerate $*$-algebra. To prove that every $T \in (M)_1$ is the strong limit of a net of elements $a_\alpha \in (A)_1$, we define

$\tilde{T} = \begin{pmatrix} 0 & T \\ T^* & 0 \end{pmatrix} \in M_2(M)_{sa}$.

Then we know that there is some bounded net of selfadjoints

$\tilde{a}_\alpha = \begin{pmatrix} b_\alpha & a_\alpha \\ a_\alpha^* & c_\alpha \end{pmatrix} \in (M_2(A)_{sa})_1$

converging to $\tilde{T}$, and it follows that $\|a_\alpha\| \leq 1$ and that $a_\alpha \to T$ in the SOT.

Thus we turn to the heart of the matter, which is proving that $(M_{sa})_1 \subseteq \overline{(A_{sa})_1}^{SOT}$. We may assume that $A$ is closed in the norm. Using Proposition 12, it is easy to prove that $A_{sa}$ is SOT dense in $M_{sa}$ (Hey there! If you don’t see why Proposition 12 is needed, then make a note to return to this point at the end of the proof). We therefore assume that $T = T^* \in M$ and $\|T\| \leq 1$, and we will find a net of selfadjoint contractions in $A$ that converges to $T$.

By the double commutant theorem, there is a net $(a_\alpha)$ in $A$ such that $a_\alpha \to T$ in the SOT, and therefore WOT. Since the adjoint is WOT continuous, $a_\alpha^* \to T^* = T$ in the WOT, and therefore $\frac{1}{2}(a_\alpha + a_\alpha^*) \to T$ weakly. The elements of this net are selfadjoint, thus,

$T \in \overline{A_{sa}}^{WOT} = \overline{A_{sa}}^{SOT}$,

where the last equality follows from Proposition 12. It follows that $T$ is the SOT limit of a net in $A_{sa}$, say $b_\alpha \to T$. The net $(b_\alpha)$ might still be missing the crucial property $\|b_\alpha\| \leq 1$. To fix this, we need the following technical lemma.

**Lemma:** *For every $f \in C_0(\mathbb{R})$, and every net of selfadjoint operators $(b_\alpha)$, if $b_\alpha \to b$ in the SOT, then $f(b_\alpha) \to f(b)$ in the SOT. *

The lemma will be proved below; in the meanwhile, let $f : \mathbb{R} \to \mathbb{R}$ be defined by $f(t) = t$ if $t$ is in the interval $[-1, 1]$, and otherwise $f(t) = 1/t$. Then $f \in C_0(\mathbb{R})$, by the functional calculus the $f(b_\alpha)$ are selfadjoint contractions, and $f(T) = T$ (because $\sigma(T) \subseteq [-1, 1]$). By the lemma, on the other hand, $f(b_\alpha) \to f(T) = T$ in the SOT. This shows that $T \in \overline{(A_{sa})_1}^{SOT}$, and the proof of assertion 2 is complete.

**Exercise L:** Complete the proof of Theorem 14.

**Proof of the lemma:** For the duration of this proof, let us agree that all functions are real valued. Let us say that a function $f \in C(\mathbb{R})$ is * strongly continuous* if $b_\alpha \to b$ in the SOT implies $f(b_\alpha) \to f(b)$ in the SOT, for every net of selfadjoint operators $(b_\alpha)$. Let us define

$\mathcal{S} = \{ f \in C_b(\mathbb{R}) : f$ is strongly continuous $\}$.

If $f, g \in \mathcal{S}$, then $fg \in \mathcal{S}$, because multiplication is SOT continuous on bounded sets. In Exercise M below, you will be asked to prove that if $(f_n)$ is a sequence of strongly continuous functions, and if $f_n$ converge uniformly to $f$, then $f$ is also strongly continuous. It follows that $\mathcal{S}$ is a norm closed subalgebra of $C_b(\mathbb{R})$. Thus, to prove that $C_0(\mathbb{R}) \subseteq \mathcal{S}$, all that we have to do is to find enough functions in $\mathcal{S} \cap C_0(\mathbb{R})$ to separate points, and then apply the (locally compact version of the) Stone-Weierstrass theorem.

We will finish the proof by showing that $f(t) = \frac{t}{1+t^2}$ and $g(t) = \frac{1}{1+t^2}$ are in $\mathcal{S}$. Since $g(t) = 1 - t f(t)$, and using our observation above on the product of strongly continuous functions, we just have to deal with $f$. If $f(t) = \frac{t}{1+t^2}$, then

$f(b) - f(a) = \frac{1}{1+b^2}(b - a)\frac{1}{1+a^2} + \frac{b}{1+b^2}(a - b)\frac{a}{1+a^2}$.

Now let $b_\alpha$ converge SOT to $a$. Then for every $h \in H$,

$\| f(b_\alpha) h - f(a) h \| \leq \left\| \frac{1}{1+b_\alpha^2}(b_\alpha - a)\frac{1}{1+a^2} h \right\| + \left\| \frac{b_\alpha}{1+b_\alpha^2}(a - b_\alpha)\frac{a}{1+a^2} h \right\|$,

and both summands on the RHS tend to zero. For example, to see that

$\left\| \frac{b_\alpha}{1+b_\alpha^2}(a - b_\alpha)\frac{a}{1+a^2} h \right\| \to 0$,

we set $u = \frac{a}{1+a^2} h$, and then we obtain that $\|(a - b_\alpha) u\| \to 0$ because $b_\alpha \to a$ SOT. But then $\left\| \frac{b_\alpha}{1+b_\alpha^2} \right\| \leq \frac{1}{2}$, so the whole expression converges to $0$.

**Exercise M:** Prove that if $(f_n)$ is a sequence of strongly continuous functions, and if $f_n$ converge uniformly to $f$, then $f$ is also strongly continuous.

**Corollary 15: ***A non-degenerate *-algebra $A \subseteq B(H)$ is a von Neumann algebra if and only if the closed unit ball of $A$ is WOT compact. *

**Definitions: **A family $\mathcal{S} \subseteq B(H)$ of operators is said to be * topologically irreducible *when $\{0\}$ and $H$ are the only closed subspaces invariant under the action of all $S \in \mathcal{S}$. The family is said to be * algebraically irreducible *when $\{0\}$ and $H$ are the only linear subspaces (closed or not) invariant under the action of all $S \in \mathcal{S}$.

**Exercise N:** Let $A \subseteq B(H)$ be a $*$-algebra. Prove that $A$ is topologically irreducible if and only if $A' = \mathbb{C}I$ (equivalently, if and only if $A'' = B(H)$).

**Exercise O: **Prove that a C*-algebra $A \subseteq B(H)$ is topologically irreducible if and only if it is algebraically irreducible.

Thanks to the result of the above exercise, neither the terminology “topologically irreducible” nor “algebraically irreducible” is used; a C*-algebra satisfying either requirement is simply said to be * irreducible*.

**Exercise P:** Generalize the result in Exercise O in the following way: if a C*-algebra $A \subseteq B(H)$ is irreducible, then for any linearly independent $h_1, \ldots, h_n \in H$ and any $g_1, \ldots, g_n \in H$ there exists $a \in A$ such that $a h_i = g_i$ for all $i$. Moreover, if there exists a selfadjoint (respectively, unitary) operator $T$ such that $T h_i = g_i$ for all $i$, then there is a selfadjoint (respectively, unitary) $a \in A$ such that $a h_i = g_i$ for all $i$.

**Exercise Q: **Prove Corollary 15.


Spoiler alert: If you are a student in the course and you plan to submit the solution of this exercise, then you shouldn’t read the rest of this post.

For every $T \in B(H)$, we define the * numerical range *of $T$ to be the set

$W(T) = \{ \langle Th, h \rangle : h \in H, \|h\| = 1 \}$,

and we define the * numerical radius * to be

$w(T) = \sup \{ |\lambda| : \lambda \in W(T) \}$.

Recall that the operator norm is given by $\|T\| = \sup \{ \|Th\| : \|h\| = 1 \}$.

**From here onwards, let us fix a selfadjoint operator $T = T^* \in B(H)$.** If $h \in H$, then we have

$\langle Th, h \rangle = \langle h, T^* h \rangle = \langle h, Th \rangle$,

while, on the other hand, using the basic properties of the inner product, we get

$\langle h, Th \rangle = \overline{\langle Th, h \rangle}$,

so $\langle Th, h \rangle$ is a real number. Thus, * the numerical range of a selfadjoint operator* is real : $W(T) \subseteq \mathbb{R}$.

Now, set

$m = \inf \{ \langle Th, h \rangle : \|h\| = 1 \}$

and

$M = \sup \{ \langle Th, h \rangle : \|h\| = 1 \}$.

**Lemma 1:** *$\sigma(T) \subseteq [m, M]$. *

**Proof:** Let $\lambda \notin [m, M]$, and set $d = \operatorname{dist}(\lambda, [m, M]) > 0$. Then, on the one hand,

$\|(T - \lambda)h\| \|h\| \geq |\langle (T - \lambda)h, h \rangle|$,

while on the other hand,

$|\langle (T - \lambda)h, h \rangle| = \left| \langle Th, h \rangle - \lambda \|h\|^2 \right| \geq d \|h\|^2$ for all $h \in H$, because $\langle Th, h \rangle / \|h\|^2 \in [m, M]$.

So, $\|(T - \lambda)h\| \geq d \|h\|$ where $d > 0$. In other words, $T - \lambda$ is bounded below. In particular, $T - \lambda$ is injective and has closed range.

On the other hand, if $h \perp \operatorname{ran}(T - \lambda)$, then $h \in \ker(T - \lambda)^* = \ker(T - \bar{\lambda})$, and $\bar{\lambda} \notin [m, M]$ too, so by the same argument, $T - \bar{\lambda}$ is also bounded below, hence injective, and $h = 0$. Thus $\operatorname{ran}(T - \lambda)$ is dense. Since $T - \lambda$ is bounded below with closed range, it follows that $\operatorname{ran}(T - \lambda) = H$, so $T - \lambda$ is invertible, and $\lambda \notin \sigma(T)$.

**Lemma 2:** $\|T\| = w(T)$ (this was phrased in Exercise B as $\|T\| = \sup_{\|h\| = 1} |\langle Th, h \rangle|$).

**Proof: **By the Cauchy-Schwarz inequality,

$|\langle Th, h \rangle| \leq \|Th\| \|h\| \leq \|T\|$ for every unit vector $h$,

so $w(T) \leq \|T\|$, and this holds for any operator. Now, since our $T$ is also assumed selfadjoint, we will be able to get equality. For this, we first require the following fact:

**Claim: ***For every operator $T \in B(H)$, *

$\|T\| = \sup \{ \|PTP\| : P$ is a projection onto a finite dimensional subspace of $H \}$.

**Proof of claim:** Suppose that $T \neq 0$, and let $\epsilon > 0$, and for peace of mind assume also that $\epsilon < \|T\|$. Let $h \in H$, $\|h\| = 1$, be such that $\|Th\| > \|T\| - \epsilon$. Now let $u$ be a normalized vector in the direction of $Th$, thus $\langle Th, u \rangle = \|Th\| > \|T\| - \epsilon$.

Now, if $P$ is the orthogonal projection onto the space spanned by $h$ and $u$, then $Ph = h$ and $Pu = u$, and so

$\|PTP\| \geq |\langle PTPh, u \rangle| = |\langle Th, u \rangle| > \|T\| - \epsilon$.

This proves the claim. To complete the proof of the lemma, we take an orthogonal projection $P$ onto a finite dimensional subspace $G = PH$. Then $PTP$ can be considered as a bounded operator on $G$, and $(PTP)^* = PT^*P = PTP$, so it is selfadjoint. By the spectral theorem for selfadjoint operators on a finite dimensional vector space (over $\mathbb{C}$, in this case), we have that $PTP$ is diagonalizable, and in this case it is easy to see that $\|PTP\|$ is equal to the modulus of the eigenvalue of $PTP$ which has maximal modulus. Moreover, $\|PTP\| = |\langle PTPv, v \rangle|$, where $v$ is a unit eigenvector corresponding to that eigenvalue. (Oh do I have to chew it? Well, if $v = \sum_i c_i v_i$ in terms of an orthonormal basis of eigenvectors $v_i$ with eigenvalues $\lambda_i$, then $\langle PTPv, v \rangle = \sum_i \lambda_i |c_i|^2$, and the maximal value of the modulus of this expression over all unit vectors occurs when the coefficient of an eigenvector of maximal $|\lambda_i|$ is one and the rest are zero). Therefore

$\|PTP\| = |\langle PTPv, v \rangle| = |\langle Tv, v \rangle| \leq w(T)$.

Applying the claim, this proves the lemma.

**Lemma 3: **$m, M \in \sigma(T)$.

**Proof: **Assume, without loss of generality, that $M = \|T\|$ (otherwise, replace $T$ with $-T$; by Lemma 2, $\max\{|m|, |M|\} = \|T\|$). Let $(h_n)$ be a sequence of unit vectors such that $\langle Th_n, h_n \rangle \to M$. Then

$\|(T - M)h_n\|^2 = \|Th_n\|^2 - 2M \langle Th_n, h_n \rangle + M^2 \leq 2M^2 - 2M \langle Th_n, h_n \rangle \to 0$.

This shows that $T - M$ is not bounded below, so $M \in \sigma(T)$. To show that $m$ is also in $\sigma(T)$, we consider the operator $S = MI - T$. Then $S = S^*$ and $\sup_{\|h\|=1} \langle Sh, h \rangle = M - m = \|S\|$ (by Lemma 2, since $S \geq 0$). Now, by a very similar argument to that given above, we find that $M - m \in \sigma(S) = M - \sigma(T)$. Therefore $m \in \sigma(T)$.

**Conclusion: **Putting Lemmas 1, 2 and 3 together, we obtain the following important characterization of the operator norm of a selfadjoint operator in terms of its spectrum:

**(*)** $\|T\| = \max \{ |m|, |M| \} = \max \{ |\lambda| : \lambda \in \sigma(T) \}$.
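The three lemmas can be checked numerically in finite dimensions (my own sketch, assuming numpy): for a real symmetric matrix, the endpoints of the numerical range are the extreme eigenvalues, and the operator norm equals the numerical radius.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
T = (A + A.T) / 2                       # a selfadjoint (real symmetric) operator

eigs = np.linalg.eigvalsh(T)
m, M = eigs.min(), eigs.max()           # the endpoints of the numerical range
norm = np.linalg.norm(T, 2)             # the operator norm

# (*): ||T|| = max(|m|, |M|) = max |lambda| over sigma(T)
assert np.isclose(norm, max(abs(m), abs(M)))

# sampled points of the numerical range never exceed the norm (w(T) <= ||T||) ...
for _ in range(200):
    h = rng.standard_normal(5); h /= np.linalg.norm(h)
    assert abs(h @ T @ h) <= norm + 1e-10

# ... and the supremum is attained at an eigenvector of the extreme eigenvalue
w, V = np.linalg.eigh(T)
h = V[:, np.argmax(np.abs(w))]
assert np.isclose(abs(h @ T @ h), norm)
```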


Perhaps we cannot start a course on von Neumann algebras, without making a few historical notes about the beginning of the theory.

(To say it more honestly and openly, what I wanted to say is that perhaps I cannot teach a course on von Neumann algebras without finally reading the classical works by von Neumann and also learning a bit about the man. von Neumann was a true genius who contributed all over mathematics; see the Wikipedia article).

In the late 1920s, Hilbert, prompted by the latest developments in quantum mechanics, was running a seminar with his assistants Nordheim and von Neumann, trying to make sense of it all. The issue was that Heisenberg, Born and Jordan (who were at Gottingen at the time too) had recently introduced “matrix mechanics”, a mathematical formalism for quantum mechanics which involved infinite matrices – the eigenvalues of which were supposed to represent observable quantities of physical significance. At the time, the spectral theory of compact operators on Hilbert space was well understood (due to Hilbert’s previous work on integral equations – which was also inspired by problems in physics), but the infinite matrices arising in matrix mechanics were not bounded.

Hilbert, Nordheim and von Neumann quickly wrote a paper on the subject, but only in von Neumann’s subsequent work, published in the years 1927-1929, were the mathematical foundations for quantum mechanics crystallized. His treatment appeared in his 1932 monograph Mathematical Foundations of Quantum Mechanics; this account of the basic formalism of quantum mechanics was so definitive, that this is more or less the formalism that is still taught today (and we should note that his contemporaries, most notably Weyl and Dirac, also published their own closely related accounts; but each of them is the main character of a different story).

In that short period von Neumann defined Hilbert spaces (which were already “around”) and developed the spectral theorem for bounded and unbounded self adjoint operators, and many of its applications (e.g., the functional calculus and the Stone-von Neumann theorem). After this fantastic success von Neumann was led to take a deeper look into operators on Hilbert space. His vision penetrating into the depths, he saw the beauty and richness, and took upon himself the construction of the foundations of the theory of operator algebras (in part jointly with Murray). Four of the foundational papers on operator algebras were a series of papers named “On Rings of Operators I-IV”. In the introduction to the first one, von Neumann lists four reasons to tackle the problems in operator algebras which they treat:

First, the formal calculus with operator-rings leads to them. Second, our attempts to generalize the theory of unitary group-representations essentially beyond their classical frame have always been blocked by the unsolved questions connected with these problems. Third, various aspects of the quantum mechanical formalism suggest strongly the elucidation of this subject. Fourth, the knowledge obtained in these investigations gives an approach to a class of abstract algebras without a finite basis, which seems to differ essentially from all types hitherto investigated.

Von Neumann continued to work on quantum mechanics, and some of his ideas and the theory of operator algebras had an influence on further developments in algebraic quantum field theory and quantum statistical mechanics (as far as I gather, some of the motivations for developing aspects of the theory have turned out to be misguided from a physical point of view). But among his many other interests and activities, he also continued to develop the theory of operator algebras (or “rings of operators” as he called them) as a piece of pure mathematics. Indeed, the “pureness” of von Neumann’s motivations is evident already from the introduction to the first “Rings of Operators” paper, and it seems to me that “differ essentially from all types hitherto investigated” is the reason that appealed to them most. After his earlier developments in operator theory, it took them roughly five years (1930-1935) to understand the basic theory of von Neumann algebras (it then took another roughly ten years to have it polished and written, but it is clear that when writing the first “Rings of Operators” paper, von Neumann *knew* the results that would appear in his 1949 paper “On Rings of Operators. Reduction Theory”. Let us not forget that there was a war in the middle of all the dramatic developments in operator algebras). The subject which has grown to become what is now known as “von Neumann algebras” has expanded exponentially since the 30s; the core and foundations of the subject – a sizable part of the material of this course – are all due to the early papers of von Neumann and Murray. Having learned this stuff from textbooks written many years later, it is humbling, inspiring and almost unbelievable to see how much was already there in the first papers.

We will now leave all discussions of historical background and connections to physics, and dive into pure, cold, mathematics. The development of the material will be, as usual in mathematics, only loosely connected to the historical development. One small remark for the reader who has already mastered this theory:

**Remark:** It is customary to prove the spectral theorem for normal bounded operators via Gelfand’s theory of commutative Banach and C*-algebras; this is a good example of teaching things not the way they historically happened, as Gelfand’s theory came about a decade after von Neumann’s spectral theorem (later thirties versus later twenties). This is also how I learned it. I took it as a small challenge to “unlearn” Gelfand theory and prove the spectral theorem without it, in order to reach the subject matter in the shortest path possible.

We start with an overview of the subject, and a sketchy description of what we hope to achieve in this course. Deeper discussions will come later.

We let $B(H)$ denote the algebra of all bounded operators on a (complex) Hilbert space $H$, equipped with the usual algebraic operations (including the adjoint operation $T \mapsto T^*$, where $\langle Th, g \rangle = \langle h, T^* g \rangle$ for all $h, g \in H$) and the **operator norm**

$\|T\| = \sup \{ \|Th\| : h \in H, \|h\| \leq 1 \}$.

The adjoint and norm are related by the “C*-identity”, which is of key importance:

**(C*)** $\|T^* T\| = \|T\|^2$.

**Exercise A: **In case you never have, prove the C*-identity.
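Of course, the identity can also be observed numerically before proving it (a sketch of my own, assuming numpy; the operator norm of a matrix is its largest singular value):

```python
import numpy as np

rng = np.random.default_rng(2)
# a random operator on a 6-dimensional complex Hilbert space
T = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))

def op_norm(A):
    return np.linalg.norm(A, 2)          # largest singular value = operator norm

# the C*-identity: ||T*T|| = ||T||^2
assert np.isclose(op_norm(T.conj().T @ T), op_norm(T) ** 2)

# by contrast, submultiplicativity alone only gives ||T*T|| <= ||T*|| ||T||
assert op_norm(T.conj().T @ T) <= op_norm(T.conj().T) * op_norm(T) + 1e-9
```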

We let $I$, or $I_H$, or sometimes simply $1$, denote the identity operator on $H$.

**Definition: **A * (concrete) C*-algebra* is a subalgebra $A \subseteq B(H)$ such that

- $A$ is a $*$-algebra (if $T \in A$ then $T^* \in A$).
- $A$ is norm closed (if $T_n \in A$ and $T_n \to T$ then $T \in A$).

Here, $T_n \to T$ means $\|T_n - T\| \to 0$.

A C*-algebra $M \subseteq B(H)$ is said to be a * von Neumann algebra* if $I \in M$ and if whenever $(T_\alpha)$ is a net in $M$ and $T_\alpha \to T$ strongly, then $T \in M$. Here, $T_\alpha \to T$ strongly means that $T_\alpha h \to T h$ for all $h \in H$; in this case we say that $T_\alpha$ converges to $T$ in the **strong operator topology**.

In short, a C*-algebra is a $*$-subalgebra of $B(H)$ which is closed in the norm, and a von Neumann algebra is a C*-algebra that contains the identity and is also closed in the strong operator topology.

There is another way to define von Neumann algebras. Given a set $\mathcal{S} \subseteq B(H)$, we define * the commutant of * $\mathcal{S}$ (denoted $\mathcal{S}'$) to be

$\mathcal{S}' = \{ T \in B(H) : TS = ST$ for all $S \in \mathcal{S} \}$.

If $\mathcal{S} = \mathcal{S}^*$ then it is easy to see that $\mathcal{S}'$ is a von Neumann algebra. By the next lecture, we will be able to prove the following: every von Neumann algebra arises as the commutant $\pi(G)'$, where $G$ is a group and $\pi$ is a unitary representation of $G$ on $H$, i.e., a homomorphism from $G$ into the group of unitaries on $H$. Thus, one may think of a von Neumann algebra as the algebra of all “symmetries” of some unitary representation.

In this course we will study the basic theory of von Neumann algebras. The first dividend of this theory is that it serves as a useful framework for studying operators on Hilbert space. Thus, our first task is to understand the C*- and von Neumann algebras that are generated by a single selfadjoint operator on $H$; much of this will be accomplished already in the first lecture.

We will see that if $T \in B(H)$ is selfadjoint, then $C^*(T) \cong C(\sigma(T))$ and $W^*(T) \cong L^\infty(X, \mu)$, where $(X, \mu)$ is a measure space (the precise meaning of the symbols will be given later). In fact, every commutative von Neumann algebra is isomorphic to some $L^\infty(X, \mu)$. First question: Given two von Neumann algebras $M_1$ and $M_2$, when are they isomorphic? (in fact, there are at least two very natural notions of what “isomorphic” means, and we will have to be more precise about that). Second question: What other kinds of von Neumann algebras exist?

As a warm up, let us look at a baby example of the first question. The algebra $L^\infty[0,1]$ acts on $L^2[0,1]$ by multiplication: given $f \in L^\infty[0,1]$,

$M_f g = fg$, $g \in L^2[0,1]$,

is a bounded operator, and $\|M_f\| = \|f\|_\infty$. Likewise $\ell^\infty = \ell^\infty(\mathbb{N})$ acts by multiplication on $\ell^2 = \ell^2(\mathbb{N})$: given $a = (a_n) \in \ell^\infty$, it acts as a diagonal operator

$D_a e_n = a_n e_n$, $n \in \mathbb{N}$,

and $\|D_a\| = \|a\|_\infty$. These two algebras, $L^\infty[0,1]$ and $\ell^\infty$, are abelian von Neumann algebras (the fact that they are strongly closed requires proof; it’s worth remembering that there is no dominated convergence theorem for nets). Are they isomorphic?

They might look to you pretty much the same, or very different, depending on who you are. If you have no experience with such questions, then it is not clear how one may go about deciding this problem. Perhaps a healthy intuition will say that they must be different, since they live on measure spaces of different natures. This will indeed solve the problem.

Here is one way to look at the problem. The algebra $\ell^\infty$ has projections which are supported on single points. These projections have the property that there are no nonzero projections sitting under them. On the other hand, any nonzero projection in $L^\infty[0,1]$ can be split into the sum of two smaller and nontrivial projections – this is because every set of nonzero measure can be split that way (the measure space has no atoms). It follows that the algebras cannot be $*$-isomorphic, since the notions of projections, positivity, and hence order, are invariant under $*$-isomorphisms.

In the setting of C*-algebras, projections are not always helpful, since there exist C*-algebras that have no nontrivial projections (can you think of an example?). But in a von Neumann algebra there is always a very rich supply of projections, and it turns out that the structure of the lattice of projections is the key to the main classification scheme of von Neumann algebras. We will spend a couple of weeks studying the lattice of projections in a von Neumann algebra.

As for the second question raised above (what other kinds of von Neumann algebras exist): it is clear that $B(H)$ itself is a von Neumann algebra, for every Hilbert space $H$. Of course, one can form direct sums, so there are von Neumann algebras of the form

$B(H_1) \oplus B(H_2) \oplus \cdots$.

The von Neumann algebras we listed are relatively simple examples of von Neumann algebras; we will later see that they all fall into one family, called * type I*. We will define later what it means to be type I; for now it suffices to say that type I algebras are either full matrix algebras of the kind $M_n(\mathbb{C})$, full operator algebras $B(H)$, commutative algebras of the kind $L^\infty(X, \mu)$, direct sums or tensor products of the above, or “continuous direct sums” of all the above (so called * direct integrals*).

There are other kinds of von Neumann algebras, that are said to be of * type II*. Here is one way to construct such examples. Let $G$ be a countable group. Let $\ell^2(G)$ be the Hilbert space with orthonormal basis $\{ \delta_g \}_{g \in G}$. For every $g \in G$, we define the (unitary) operator $L_g$ by

$L_g \delta_h = \delta_{gh}$, $h \in G$.

Clearly, $g \mapsto L_g$ is a faithful (unitary) representation of $G$. If we look at the subalgebra of $B(\ell^2(G))$ generated by $\{ L_g : g \in G \}$, we get an algebra that in general “knows” some things about the group (though, it is in general not possible to recover $G$ from it). Define

$L(G) = \overline{\operatorname{span} \{ L_g : g \in G \}}^{SOT}$

(the strong operator closure). Then $L(G)$ is a von Neumann algebra; it is called the * group von Neumann algebra of * $G$. One of the problems we will study is: what can one learn about a group from its von Neumann algebra and vice versa. Another interesting thing to say, is that group von Neumann algebras give a class of examples of von Neumann algebras that we have not listed above. Not always is $L(G)$ a type II algebra – for example, if $G$ is commutative then $L(G)$ is commutative, so it is type I. But in certain cases (and one can give precise conditions) $L(G)$ can be shown to be of a completely different nature than the type I examples, and is said to be of * type II*.
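For a finite group everything can be computed explicitly (my own sketch, assuming numpy; the names `L`, `R`, `mult` are mine): the left regular representation of $S_3$ acts by permutation matrices on $\mathbb{C}^6$, and the commutant of $\{L_g\}$ turns out to be exactly the span of the right translations $R_g \delta_h = \delta_{hg}$.

```python
import numpy as np
from itertools import permutations

# Left regular representation of G = S_3 on l^2(G) = C^6: L_g delta_h = delta_{gh}
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}

def mult(g, h):                     # composition (gh)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(3))

def L(g):
    M = np.zeros((6, 6))
    for h in G:
        M[idx[mult(g, h)], idx[h]] = 1.0
    return M

def R(g):                           # right translations: R_g delta_h = delta_{hg}
    M = np.zeros((6, 6))
    for h in G:
        M[idx[mult(h, g)], idx[h]] = 1.0
    return M

# each L_g is a unitary, and every R_k commutes with every L_g
for g in G:
    assert np.allclose(L(g) @ L(g).T, np.eye(6))
    for k in G:
        assert np.allclose(L(g) @ R(k), R(k) @ L(g))

# dimension count: solving T L_g = L_g T for all g leaves a 6-dimensional space,
# so the commutant of {L_g} is exactly span{R_k : k in G}
A = np.vstack([np.kron(np.eye(6), L(g).T) - np.kron(L(g), np.eye(6)) for g in G])
assert 36 - np.linalg.matrix_rank(A) == 6
```

This is a first instance of the slogan above: the left group von Neumann algebra is the commutant of the right one (and vice versa).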

While on the subject of group von Neumann algebras, let us mention a very big open problem:

**Open problem:** Let $\mathbb{F}_n$ denote the free group on $n$ generators. Is it true or false that $L(\mathbb{F}_2) \cong L(\mathbb{F}_3)$?

This is a notoriously difficult problem, and the attention that it has drawn resulted in several of the major developments in von Neumann algebras, for example * free probability theory *(about which we will probably have no time to elaborate).

We will say something about the general classification scheme for von Neumann algebras. It turns out that there are three basic types of von Neumann algebras: types I and II, examples of which we mentioned above, and yet another type – * type III* – which is of quite a different nature (as hinted above, these types are defined in terms of the structure of the lattice of projections in them). Every von Neumann algebra can be decomposed into a direct sum consisting of a type I, a type II and a type III von Neumann algebra. Classification of von Neumann algebras can be in principle reduced to the classification of “simple” von Neumann algebras, which are called * factors*.

Every type I factor is of the form $B(H)$, and these algebras are completely classified by $\dim H$. McDuff showed that there are uncountably many non-isomorphic type II factors (acting on a separable Hilbert space). The group algebras mentioned above are examples of factors of type II, and the open problem above suggests that classification of type II factors is beyond all hope. However, we will see that if $M_1$ and $M_2$ are infinite dimensional type II$_1$ factors (acting on separable Hilbert spaces) and if $M_1, M_2$ are both * amenable*, then $M_1 \cong M_2$.

At first Murray and von Neumann were not able to decide whether there do or do not exist factors of type III. Eventually, von Neumann constructed an example, and decades later Powers showed that there are uncountably many non-isomorphic type III factors (acting on a separable Hilbert space). The classification of so-called * amenable *type III factors was carried out mostly by Connes, a work for which he was awarded the Fields Medal (following work of Tomita-Takesaki and others, and the classification was completed by Haagerup). We will not discuss this deep and difficult subject in this course, but I hope that we will at least see uncountably many non-isomorphic examples.

Just as one can form a von Neumann algebra $L(G)$ that encodes some information about a group $G$, one can form a von Neumann algebra (called the * crossed product*) that encodes the action of a group $G$ by measure preserving transformations on a measure space $(X, \mu)$. We will discuss how the properties of the action are encoded in the crossed product. A relatively simple fact is that, under a certain “freeness” assumption, the action is ergodic if and only if the crossed product is a factor.

One final kind of problem that we will discuss will be very different from the kinds discussed in the last few paragraphs. The problems will be of the kind: what are the fundamental structural properties of von Neumann algebras? For example, von Neumann algebras are, in particular, C*-algebras. Not all C*-algebras are von Neumann algebras. What makes von Neumann algebras special? Do they have an abstract characterization? von Neumann algebras are also, in particular, Banach spaces. Do they happen to have some special properties, in terms of their Banach space structure? It turns out that they do: if $M$ is a von Neumann algebra, then there is a (unique!) Banach space $M_*$ such that $(M_*)^* = M$ (i.e., $M$ is the Banach space dual of the Banach space $M_*$), and the existence of such a * pre-dual* characterizes the C*-algebras that “happen to be” von Neumann algebras.

That was a panoramic view of what we might hope to achieve in this course. But now we must start the course proper, and let us start from the very beginning.

We now recall some things that everyone who attended a first course in functional analysis (so everyone attending this course) should know. An operator $T \in B(H)$ is said to be

- **selfadjoint** if $T = T^*$,
- **normal** if $T T^* = T^* T$,
- **isometric** if $T^* T = I$,
- **unitary** if it is a normal isometry,
- a **projection** if $T = T^* = T^2$ (in this case it is the orthogonal projection onto some closed subspace of $H$; in Hilbert spaces, we will use the word **projection** for **orthogonal projections**),
- a **contraction** if $\|T\| \leq 1$,
- *positive* if $\langle Th, h \rangle \geq 0$ for all $h \in H$; we then write $T \geq 0$.

Let us write $P(H)$ for the projections on $H$, $U(H)$ for the unitaries on $H$, $B(H)_+$ for the positive elements, and $B(H)_{sa}$ for the selfadjoint elements. The notion of positivity induces an order on $B(H)_{sa}$: we say that $S \leq T$ if $T - S \geq 0$.

For any $T \in B(H)$, the * spectrum* of $T$ is the subset of the complex plane defined by

$\sigma(T) = \{ \lambda \in \mathbb{C} : T - \lambda I$ does **not** have a bounded inverse $\}$.

For every $T \in B(H)$, $\sigma(T)$ is a closed set contained in $\{ z \in \mathbb{C} : |z| \leq \|T\| \}$. For selfadjoint operators, the non-emptiness of the spectrum is easier to establish than for general operators, and follows from the following facts:

Fix $T = T^* \in B(H)$, and set $m = \inf_{\|h\|=1} \langle Th, h \rangle$ and $M = \sup_{\|h\|=1} \langle Th, h \rangle$ (it is easy to see that $\langle Th, h \rangle \in \mathbb{R}$ for $T = T^*$). Then

- $\sigma(T) \subseteq [m, M]$,
- $m, M \in \sigma(T)$,
- $\|T\| = \max \{ |m|, |M| \}$.

In particular, the above facts can be put together to yield for a selfadjoint operator $T$:

**(*)** $\|T\| = \max \{ |\lambda| : \lambda \in \sigma(T) \}$

**Exercise B:** Prove the above 3 facts and equation (*) above (assuming that $T = T^*$). **Hint: **To solve the exercise, technology from a first course in functional analysis suffices; perhaps the most nontrivial part is $\|T\| = \max\{|m|, |M|\}$, which can be reformulated as $\|T\| = w(T)$, where $w(T) = \sup_{\|h\|=1} |\langle Th, h \rangle|$ is the **numerical radius**. A direct proof can be found in many texts, for example Proposition 10.2.6 in my book. Alternatively, one can cleverly reduce to the case of compact selfadjoint operators.

Given any $T \in B(H)$ and any polynomial $p(z) = \sum_{k=0}^{n} c_k z^k$, the evaluation of $p$ at $T$ has an obvious meaning:

$p(T) = \sum_{k=0}^{n} c_k T^k$.

In particular, $p(T)$ is a well defined selfadjoint operator if $T = T^*$ and $p \in \mathbb{R}[z]$, i.e., if $p$ is a polynomial with real coefficients.

**Theorem 1 (spectral mapping theorem): ***Let $T \in B(H)$ and $p \in \mathbb{C}[z]$. Then*

*$\sigma(p(T)) = p(\sigma(T))$.*

**Proof:** Fix a non constant polynomial $p$ (the constant case is immediate). For every $\lambda \in \mathbb{C}$, we can factor the polynomial $p(z) - \lambda = c (z - \mu_1) \cdots (z - \mu_n)$. We therefore have

$p(T) - \lambda I = c (T - \mu_1) \cdots (T - \mu_n)$.

The left hand side is not invertible if and only if one of the (commuting) factors is not invertible, which happens if and only if $\mu_i \in \sigma(T)$ for some $i$. Thus, $\lambda \in \sigma(p(T))$ if and only if there is some $\mu \in \sigma(T)$ such that $p(\mu) = \lambda$.
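In finite dimensions the theorem says that the eigenvalues of $p(T)$ are exactly the values of $p$ on the eigenvalues of $T$, with multiplicity, which is easy to verify (a sketch of my own, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
T = (B + B.T) / 2                                # selfadjoint, so real eigenvalues

p = np.polynomial.Polynomial([1.0, -3.0, 2.0])   # p(z) = 1 - 3z + 2z^2
pT = np.eye(4) - 3 * T + 2 * (T @ T)             # p evaluated at the operator T

spec_T = np.linalg.eigvalsh(T)
# spectral mapping: sigma(p(T)) = p(sigma(T)), with multiplicities
assert np.allclose(np.sort(p(spec_T)), np.linalg.eigvalsh(pT))
```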

Given a compact topological space $X$, let $C(X)$ denote the algebra of complex valued continuous functions on $X$, and let $C_{\mathbb{R}}(X)$ be the real valued continuous functions. We equip these algebras with the supremum norm $\|f\|_\infty = \sup_{x \in X} |f(x)|$, and this gives both these algebras the structure of a Banach algebra (i.e., a Banach space with a multiplication such that $\|fg\| \leq \|f\| \|g\|$). These algebras also carry a $*$ operation, $f^*(x) = \overline{f(x)}$, and this makes them into “abstract” C*-algebras (i.e., Banach algebras satisfying the identity **(C*)**).

The following theorem makes sense of “evaluating a continuous function at $a$”.

**Theorem 2 (continuous functional calculus):** *Let $a = a^* \in B(H)$, and let $C^*(a)$ be the unital C*-algebra generated by $a$:*

*$C^*(a) = \overline{\{p(a) : p \text{ is a polynomial}\}}$.*

Then there exists an isomorphism $\Phi : C(\sigma(a)) \to C^*(a)$ such that

- $\Phi(p) = p(a)$ for every polynomial $p$.
- $\|\Phi(f)\| = \|f\|_{\infty}$ for every $f \in C(\sigma(a))$.
- $\Phi(\bar{f}) = \Phi(f)^*$.
- If $f \in C_{\mathbb{R}}(\sigma(a))$ and $f \geq 0$, then $\Phi(f) \geq 0$.

**Remark:** The mapping $\Phi(f)$ is usually denoted simply $f(a)$. Note that for every $f \in C(\sigma(a))$, we have that $f(a) \in C^*(a)$. The map $f \mapsto f(a)$ is referred to as *the continuous functional calculus*. The inverse mapping $C^*(a) \to C(\sigma(a))$ is called *the Gelfand transform*.

**Proof of Theorem 2:** We consider first the *real* norm closed algebra

$A_{\mathbb{R}} = \overline{\{p(a) : p \text{ is a polynomial with real coefficients}\}}$

and show that there is an isometric isomorphism $\Phi : C_{\mathbb{R}}(\sigma(a)) \to A_{\mathbb{R}}$. The map $p \mapsto p(a)$ is clearly an algebraic homomorphism from the real polynomials into the real algebra $A_{\mathbb{R}}$, which consists of selfadjoint operators only. By Weierstrass’s polynomial approximation theorem, the real polynomials are dense in $C_{\mathbb{R}}(\sigma(a))$. So, to prove the existence of an isometric isomorphism, it suffices to show that $\|p(a)\| = \sup_{t \in \sigma(a)} |p(t)|$ for every real polynomial $p$. But by equation (*) and the spectral mapping theorem,

$\|p(a)\| = \sup\{|\lambda| : \lambda \in \sigma(p(a))\} = \sup\{|p(t)| : t \in \sigma(a)\}$,

where we have used the fact that $p(a)$ is selfadjoint (here we use that the coefficients of $p$ are real). Thus $p \mapsto p(a)$ extends to an algebra isomorphism $\Phi : C_{\mathbb{R}}(\sigma(a)) \to A_{\mathbb{R}}$ satisfying items 1, 2 and 3 in the statement of the theorem (item 3 is satisfied in an empty way). To show that $\Phi$ preserves order, it is enough to show that if $f \geq 0$, then $f(a) \geq 0$. But if $f \geq 0$ and $f$ is continuous on $\sigma(a)$, then $g = \sqrt{f} \in C_{\mathbb{R}}(\sigma(a))$, so $f(a) = g(a)^2 \geq 0$, as the square of a selfadjoint operator.

Now if $f \in C(\sigma(a))$, then $f = g + ih$ for unique $g, h \in C_{\mathbb{R}}(\sigma(a))$. Then we can define $f(a) = g(a) + ih(a)$. This extends $\Phi$ to a well defined homomorphism into $C^*(a)$, and it preserves positivity and the $*$-operation. Finally, $\Phi$ is isometric:

$\|f(a)\|^2 = \|f(a)^* f(a)\| = \|(|f|^2)(a)\| = \||f|^2\|_{\infty} = \|f\|_{\infty}^2$.
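In finite dimensions the continuous functional calculus is completely concrete; the following numerical sketch (numpy, with an arbitrary random symmetric matrix) computes $f(a)$ by applying $f$ to the eigenvalues, and checks the isometry $\|f(a)\| = \sup_{t \in \sigma(a)} |f(t)|$:

```python
import numpy as np

# Diagonalize a = u diag(lam) u* and set f(a) = u diag(f(lam)) u*.
rng = np.random.default_rng(1)
x = rng.standard_normal((5, 5))
a = (x + x.T) / 2

lam, u = np.linalg.eigh(a)               # a = u @ diag(lam) @ u.T

def f_of_a(f):
    """Continuous functional calculus for the symmetric matrix a."""
    return u @ np.diag(f(lam)) @ u.T

fa = f_of_a(np.cos)

# the calculus is isometric: the operator norm of f(a) is sup over sigma(a) of |f|
print(np.isclose(np.linalg.norm(fa, 2), np.abs(np.cos(lam)).max()))  # True
```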

From the continuous functional calculus we will derive the spectral theorem below, but first a couple of quick corollaries.

**Corollary 3 (existence of a positive square root):** *Let $a \in B(H)_+$. Then there exists a unique positive operator $b$ such that $b^2 = a$. In fact, $b \in C^*(a)$.*

**Remark:** The operator $b$ is called *the positive square root* of $a$, and is denoted $\sqrt{a}$ or $a^{1/2}$.

**Proof of Corollary 3:** With the notation of the functional calculus, we have that $a = f(a)$, where $f$ is the continuous function on $\sigma(a) \subseteq [0, \infty)$ given by $f(t) = t$. Then $b = g(a)$, where $g(t) = \sqrt{t}$, is the required square root (the function $g$ is just $\sqrt{f}$; sorry for the pedantry!). The uniqueness is left as an exercise – you can find a solution at the end of this post.
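A hedged numerical illustration of the corollary (numpy; the positive matrix below is an arbitrary Gram matrix): the square root is obtained by applying $t \mapsto \sqrt{t}$ to the eigenvalues.

```python
import numpy as np

# Positive square root via the functional calculus: apply sqrt to the spectrum.
rng = np.random.default_rng(2)
x = rng.standard_normal((4, 4))
a = x @ x.T                              # a = x x* is positive

lam, u = np.linalg.eigh(a)
# clip guards against tiny negative eigenvalues produced by rounding
b = u @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ u.T

print(np.allclose(b @ b, a))             # b^2 = a
print(np.all(np.linalg.eigvalsh(b) >= -1e-12))  # b >= 0
```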

The following exercise shows that C*-algebras are generated by their selfadjoint elements. It will also allow us later to extend theorems that we obtain for selfadjoint operators to theorems on normal operators (see Exercise K below).

**Exercise C:** Prove that for every operator $a$ in a C*-algebra $A$, there exist two unique selfadjoint operators $b, c \in A$ such that $a = b + ic$. Moreover, $a$ is normal if and only if $bc = cb$ (in this case we say that $b$ and $c$ *commute*).

**Exercise D:** Prove that every element in a unital C*-algebra $A$ is a linear combination of unitaries in $A$. (Hint: use the continuous functional calculus and the previous exercise). In other words, every C*-algebra is generated (in fact, spanned) by its unitaries.

**Exercise E:** Prove that if $\lambda$ is an isolated point in the spectrum of a selfadjoint operator $a$, then $\lambda$ is an eigenvalue of $a$ (i.e., there exists a nonzero $h \in H$ such that $ah = \lambda h$).

To state another important decomposition theorem, we need a new definition.

**Definition:** An operator $v \in B(H)$ is said to be a *partial isometry* if the restricted operator $v\big|_{(\ker v)^{\perp}}$ is an isometry from $(\ker v)^{\perp}$ onto $\operatorname{ran} v$. The space $(\ker v)^{\perp}$ is called the *initial space* of $v$, and $\operatorname{ran} v$ is called the *final space* of $v$.

**Exercise F:** If $v$ is a partial isometry, then $v^* v$ is the orthogonal projection onto the initial space of $v$, and $v v^*$ is the orthogonal projection onto the final space of $v$.

**Exercise G:** For an operator $v \in B(H)$, the following are equivalent.

- $v$ is a partial isometry.
- $v v^* v = v$.
- $v^* v$ is a projection.
- $v^*$ is a partial isometry.
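A concrete example to play with (numpy; my own choice, not from the lecture): the truncated shift on $\mathbb{C}^3$ is a partial isometry, and the conditions of Exercise G can be checked directly.

```python
import numpy as np

# Truncated shift: e2 -> e1, e3 -> e2, e1 -> 0. Initial space span{e2, e3},
# final space span{e1, e2}.
v = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

p = v.T @ v                              # v*v: projection onto initial space
q = v @ v.T                              # vv*: projection onto final space

print(np.allclose(v @ v.T @ v, v))       # v v* v = v
print(np.allclose(p @ p, p), np.allclose(p, p.T))  # v*v is a projection
print(np.allclose(q @ q, q))             # vv* is a projection as well
```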

**Corollary 4 (polar decomposition):** *Let $a \in B(H)$. Then there exists a unique partial isometry $v$ with $\ker v = \ker a$ and a unique positive operator $p$ with $\ker p = \ker a$ such that $a = vp$. The operator $p$ is given by $p = (a^*a)^{1/2}$, and it is contained in $C^*(a)$.*

**Remark:** The operator $(a^*a)^{1/2}$ is denoted $|a|$ and is called the *absolute value* of $a$. The decomposition $a = v|a|$ is called *the polar decomposition* of $a$.

**Proof of Corollary 4:** Existence: Put $p = (a^*a)^{1/2}$. Then for every $h \in H$,

$\|ph\|^2 = \langle p^2 h, h \rangle = \langle a^*a h, h \rangle = \|ah\|^2$.

In particular, $\ker p = \ker a$. Moreover, the equality of norms implies that the map $ph \mapsto ah$ is a well defined isometric linear map from $\operatorname{ran} p$ onto $\operatorname{ran} a$. It therefore extends continuously to an isometry from $\overline{\operatorname{ran}}\, p$ onto $\overline{\operatorname{ran}}\, a$. Setting $v = 0$ on $\ker p = (\overline{\operatorname{ran}}\, p)^{\perp}$ completes the construction.

Uniqueness: the assumptions imply that $a^*a = p v^* v p = p^2$, so $p$ is the unique positive square root of $a^*a$. In the “existence” part of the proof we already noted that there is a unique partial isometry with kernel $\ker a$ that maps $ph \mapsto ah$.
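For matrices, the polar decomposition can be read off the singular value decomposition; the sketch below (numpy, random matrix of my own choosing) computes $|a|$ and $v$ and verifies $a = v|a|$.

```python
import numpy as np

# From the SVD a = w diag(s) z*, one gets |a| = z diag(s) z* and v = w z*
# (a unitary here, since a random square Gaussian matrix is a.s. invertible).
rng = np.random.default_rng(3)
a = rng.standard_normal((4, 4))

w, s, zt = np.linalg.svd(a)
abs_a = zt.T @ np.diag(s) @ zt           # |a| = (a* a)^{1/2}
v = w @ zt

print(np.allclose(v @ abs_a, a))         # a = v |a|
print(np.allclose(abs_a @ abs_a, a.T @ a))  # |a|^2 = a* a
print(np.allclose(v.T @ v, np.eye(4)))   # v is an isometry (here unitary)
```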

The spectral theorem for selfadjoint operators is the basic structure theorem for selfadjoint operators. It tells us what a general selfadjoint operator looks like. Recall that if $a$ is a selfadjoint operator acting on a finite dimensional space $H$, then $a$ is unitarily equivalent to a diagonal operator with real entries, that is, there exists a unitary operator $u$ such that

$u a u^* = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$,

where $\lambda_1, \ldots, \lambda_n \in \sigma(a)$ (where some points in $\sigma(a)$ are possibly repeated).

Moreover, if $a$ is a compact selfadjoint operator on a Hilbert space, then it is unitarily equivalent to a diagonal operator (an infinite diagonal matrix, acting by multiplication on $\ell^2$), the diagonal of which corresponds to the eigenvalues of $a$, which form a sequence converging to $0$:

$u a u^* = \operatorname{diag}(\lambda_1, \lambda_2, \ldots), \quad \lambda_n \to 0$.

If $a$ is unitarily equivalent to a diagonal operator where the diagonal elements form a bounded sequence of real numbers (not necessarily converging to $0$), then $a$ is a bounded selfadjoint operator (which is not necessarily compact). However, a general bounded selfadjoint operator need not be unitarily equivalent to a diagonal operator.

**Example:** The operator $M : L^2[0,1] \to L^2[0,1]$ given by $(Mf)(x) = x f(x)$ is a selfadjoint bounded operator, and it is an easy exercise to see that this operator has no eigenvalues (so it cannot be unitarily equivalent to a diagonal operator). However, the operator in this example is rather well understood, and it is “sort of” diagonal. The general case is not significantly more complicated than this.

To understand general selfadjoint operators, one needs to recall the notions of measure space and of $L^2$ spaces. Let $(X, \mu)$ be a measure space and consider the Hilbert space $L^2(X, \mu)$. Every $f \in L^\infty(X, \mu)$ defines a (normal) bounded operator $M_f$ on $L^2(X, \mu)$ by $M_f g = fg$.

**Exercise H:** In case you never have, prove the following facts (or look them up; Kadison-Ringrose have a nice treatment relevant to our setting). Let $(X, \mu)$ be a $\sigma$-finite measure space and $f \in L^\infty(X, \mu)$.

- $\|M_f\| = \|f\|_\infty$ (where $\|f\|_\infty$ is the **essential supremum** of $|f|$, which is defined to be $\inf\{c \geq 0 : |f| \leq c \ \mu\text{-a.e.}\}$).
- $M_f^* = M_{\bar{f}}$.
- If $f$ is measurable and $M_f$ defines a bounded operator on $L^2(X, \mu)$, then $f$ is essentially bounded: $\|f\|_\infty < \infty$.
- If $f, g \in L^\infty(X, \mu)$, then $M_f M_g = M_{fg}$ and $M_f + M_g = M_{f+g}$.
- $M_f$ is selfadjoint if and only if $f$ is real valued almost everywhere.

The algebra $L^\infty(X, \mu)$ is an abstract C*-algebra with the usual algebraic operations, the $*$-operation $f^* = \bar{f}$, and norm $\|f\|_\infty$. The map

$L^\infty(X, \mu) \ni f \mapsto M_f \in B(L^2(X, \mu))$

is a $*$-representation (i.e., an algebraic homomorphism that preserves the adjoint: $M_f^* = M_{\bar{f}}$), which is isometric ($\|M_f\| = \|f\|_\infty$), so, omitting the symbol $M$, we can think of $L^\infty(X, \mu)$ as a C*-subalgebra of $B(L^2(X, \mu))$. Since $M_f^* = M_{\bar{f}}$, the operator $M_f$ is selfadjoint if and only if $f$ is a.e. real valued. The operator $M_f$, where $f \in L^\infty(X, \mu)$, is called a *multiplication operator*. Multiplication operators form a rich collection of examples of selfadjoint operators. The spectral theorem says that this collection exhausts all selfadjoint operators: every selfadjoint operator is unitarily equivalent to a multiplication operator.
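A toy illustration (numpy; the finite measure space is my own choice, not from the lecture): on $X = \{1, \ldots, n\}$ with counting measure, $L^2(X)$ is $\mathbb{C}^n$ and the multiplication operator $M_f$ is just the diagonal matrix $\operatorname{diag}(f)$.

```python
import numpy as np

# A multiplication operator on a finite measure space is a diagonal matrix.
f = np.array([0.3, -1.7, 0.9, 1.2])      # an essentially bounded "function"
mf = np.diag(f)

g = np.array([1.0, 2.0, -1.0, 0.5])      # a vector in L^2(X)
print(np.allclose(mf @ g, f * g))        # M_f g is pointwise multiplication
print(np.isclose(np.linalg.norm(mf, 2), np.abs(f).max()))  # ||M_f|| = ||f||_inf
print(np.allclose(mf, mf.T))             # f real valued => M_f selfadjoint
```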

**Theorem 5 (the spectral theorem):** *Let $a$ be a selfadjoint operator on a Hilbert space $H$. Then there exists a measure space $(X, \mu)$, a unitary operator $U : L^2(X, \mu) \to H$, and a real valued $f \in L^\infty(X, \mu)$, such that*

*$U^* a U = M_f$.*

*When $H$ is separable, $X$ can be taken to be a locally compact Hausdorff space, and $\mu$ a regular Borel probability measure.*

We will prove the spectral theorem in the case that $a$ has a *cyclic vector*; the general case will then easily follow and will be left as an exercise.

**Definition:** Let $a \in B(H)$. A vector $h \in H$ is said to be a *cyclic vector* for $a$ if

$\overline{\operatorname{span}}\{a^n h : n = 0, 1, 2, \ldots\} = H$.

**Exercise I:** Let $a = a^* \in B(H)$. Prove that there exists a family of vectors $\{h_i\}_{i \in I}$ such that

$H = \bigoplus_{i \in I} H_i$,

where $H_i = \overline{\operatorname{span}}\{a^n h_i : n = 0, 1, 2, \ldots\}$; in particular, for every $i$, $a H_i \subseteq H_i$. In other words, every selfadjoint operator is the direct sum of operators that have a cyclic vector.

**Proof of the spectral theorem under the assumption that there exists a cyclic vector:** Suppose that $h$ is a cyclic unit vector for $a$. Let $X = \sigma(a)$. By the continuous functional calculus, there is an isometric $*$-isomorphism $\Phi : C(X) \to C^*(a)$ which satisfies $\Phi(p) = p(a)$ for every polynomial $p$. Recall that we write $f(a)$ for $\Phi(f)$.

Define a linear functional $\varphi$ on $C(X)$ by

$\varphi(f) = \langle f(a) h, h \rangle$.

Then $\varphi$ is a positive linear functional on $C(X)$, and $\varphi(1) = \|h\|^2 = 1$. By the Riesz representation theorem there exists a unique regular Borel probability measure $\mu$ (defined on all Borel subsets of $X$) such that $\varphi(f) = \int_X f \, d\mu$ for all $f \in C(X)$. This is the measure that appears in the statement of the theorem.

Form $L^2(X, \mu)$. We define $U : L^2(X, \mu) \to H$ by first requiring that $Uf = f(a)h$ for all $f \in C(X)$. Now, $C(X)$ is a dense subspace of $L^2(X, \mu)$, and by the cyclicality assumption, $\{f(a)h : f \in C(X)\}$ is a dense subspace of $H$. So if we show that $U$ is isometric on $C(X)$, it will follow that $U$ extends to a unitary $U : L^2(X, \mu) \to H$; the isometry follows from:

$\|Uf\|^2 = \langle f(a)h, f(a)h \rangle = \langle (\bar{f}f)(a)h, h \rangle = \int_X |f|^2 \, d\mu = \|f\|_{L^2(X, \mu)}^2$,

for all $f \in C(X)$.

Finally, let $g(x) = x$ for $x \in X$. Clearly, $g$ is a bounded real valued function on $X$. Then $U M_g f = U(gf) = (gf)(a)h$, while $a U f = a f(a) h = (gf)(a)h$, so $U M_g = a U$, and the proof is complete.

**Remark:** In the proof above, we constructed a measure $\mu$ by

(**) $\quad \int_X f \, d\mu = \langle f(a)h, h \rangle \quad$ for all $f \in C(\sigma(a))$,

where $h$ was assumed to be a cyclic vector for $a$. In fact, the same construction makes very good sense also when $h$ is not necessarily cyclic. The measure $\mu = \mu_h$ is then sometimes referred to as *the spectral measure associated to $h$ (and $a$)*. Warning: the term “spectral measure” will appear again below and will then mean something different. In any case, it is an instructive exercise to see what the measure $\mu_h$ looks like when $a$ is a selfadjoint matrix and $h$ is an arbitrary vector.
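For a selfadjoint matrix this measure is easy to describe and to check numerically: it is the atomic measure putting mass $|\langle h, u_i \rangle|^2$ at each eigenvalue $\lambda_i$ (eigenvectors $u_i$). A sketch with an arbitrary random matrix and vector:

```python
import numpy as np

# For a = sum_i lam_i u_i u_i* and a unit vector h, the measure of (**) is
# mu({lam_i}) = |<h, u_i>|^2, and <f(a)h, h> = sum_i f(lam_i) |<h, u_i>|^2.
rng = np.random.default_rng(4)
x = rng.standard_normal((5, 5))
a = (x + x.T) / 2
lam, u = np.linalg.eigh(a)

h = rng.standard_normal(5)
h /= np.linalg.norm(h)                   # unit vector, not necessarily cyclic

weights = (u.T @ h) ** 2                 # mu({lam_i}) = |<h, u_i>|^2
print(np.isclose(weights.sum(), 1.0))    # mu is a probability measure

f = lambda t: t**2 - t                   # any Borel function will do
fa = u @ np.diag(f(lam)) @ u.T
print(np.isclose(h @ fa @ h, (f(lam) * weights).sum()))  # <f(a)h,h> = int f dmu
```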

**Exercise J:** Show how the spectral theorem for general selfadjoint operators follows from the case where the operator has a cyclic vector. Take care to establish also the final assertion of the theorem.

In Theorem 2, we saw that for a selfadjoint operator $a$ and a continuous function $f \in C(\sigma(a))$, one can define an operator $f(a)$. In fact,

$f(a) \in C^*(a)$.

The mapping $f \mapsto f(a)$ is called the continuous functional calculus, and has some nice algebraic and analytic properties. In this section we will extend the functional calculus to all bounded Borel functions, that is, we will show how to define $f(a)$ whenever $f$ is a bounded function defined on $\sigma(a)$ that is Borel measurable. This assignment (called *the Borel functional calculus*) will have similar nice properties, with the main differences being (i) the map $f \mapsto f(a)$ is not necessarily isometric, and (ii) $f(a)$ will not necessarily lie in $C^*(a)$, but rather in $W^*(a)$ (i.e., in the von Neumann algebra generated by $a$).

Let $\mathcal{B}(X)$ denote the algebra of all bounded Borel measurable functions on a compact space $X$, equipped with the supremum norm and the adjoint operation $f^* = \bar{f}$.

**Theorem 6 (the Borel functional calculus):** *Let $a$ be a selfadjoint operator on a Hilbert space $H$, and write $X = \sigma(a)$. There exists a contractive $*$-homomorphism $\mathcal{B}(X) \ni f \mapsto f(a) \in W^*(a)$ that extends the continuous functional calculus. If $\{f_n\}$ is a bounded sequence in $\mathcal{B}(X)$ that converges pointwise to $f$, then $f_n(a) \to f(a)$ in the strong operator topology.*

**Remark:** By the end of the next lecture, you will be able to establish that this $*$-homomorphism is surjective, that is, that the von Neumann algebra generated by $a$ has the form $W^*(a) = \{f(a) : f \in \mathcal{B}(\sigma(a))\}$ (so you better be on the lookout!).

**Proof of Theorem 6:** Given $f \in \mathcal{B}(X)$, the operator $f(a)$ is defined to be $U M_{f \circ g} U^*$, where $U$ is the unitary equivalence $U^* a U = M_g$ of $a$ with a multiplication operator. This makes sense, because $f$ being bounded and Borel measurable implies that $f \circ g \in L^\infty$. The only subtle point is to prove that $f(a) \in W^*(a)$. We will prove this for the case where $a$ has a cyclic vector. The case where $a$ is a general selfadjoint operator on a separable Hilbert space will be left as an exercise (easy, given the proof for the cyclic case); the case where $H$ is not even separable will be ignored.

Thus, let us assume that $a = M_g$ on $L^2(X, \mu)$, for the function $g(x) = x$, where $\mu$ is a regular Borel probability measure on $X = \sigma(a)$. By a consequence of Lusin’s theorem, there is a bounded sequence $\{f_n\}$ of continuous functions that converge $\mu$-almost everywhere to $f$. By the dominated convergence theorem,

$\|f_n(a)h - f(a)h\|^2 = \int_X |f_n - f|^2 |h|^2 \, d\mu \longrightarrow 0$ for every $h \in L^2(X, \mu)$,

so $f_n(a) = M_{f_n}$ converge SOT to $M_f = f(a)$, thus $f(a) \in W^*(a)$.

A similar argument also shows the final assertion of the theorem.

Fix a selfadjoint operator $a \in B(H)$. For the characteristic function $\chi_S$ of a Borel set $S \subseteq \sigma(a)$ we can define, by the Borel functional calculus,

$E(S) = \chi_S(a)$.

Since $\chi_S^2 = \chi_S = \overline{\chi_S}$, the operator $E(S)$ is an orthogonal projection. The properties of the functional calculus also imply that

**Exercise K:**

- $E(S \cap T) = E(S) E(T)$.
- $E(\emptyset) = 0$ and $E(\sigma(a)) = I$.
- $E\left(\bigcup_n S_n\right) = \sum_n E(S_n)$ for every disjoint family of Borel sets $\{S_n\}$, where the sum converges in the strong operator topology.
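These properties can be checked in the matrix case, where $E(S)$ is the orthogonal projection onto the span of the eigenvectors whose eigenvalue lies in $S$; a numerical sketch (numpy, arbitrary random matrix and sets of my own choosing):

```python
import numpy as np

# Spectral projections of a selfadjoint matrix: chi_S(a) projects onto the
# eigenspaces with eigenvalue in S.
rng = np.random.default_rng(5)
x = rng.standard_normal((6, 6))
a = (x + x.T) / 2
lam, u = np.linalg.eigh(a)

def E(indicator):
    """chi_S(a) for the indicator function of a Borel set S."""
    return u @ np.diag(indicator(lam).astype(float)) @ u.T

S = lambda t: t >= 0
T = lambda t: t < 1

print(np.allclose(E(S) @ E(S), E(S)))                      # E(S) is a projection
print(np.allclose(E(S) @ E(T), E(lambda t: S(t) & T(t))))  # E(S ∩ T) = E(S)E(T)
print(np.allclose(E(S) + E(lambda t: ~S(t)), np.eye(6)))   # additivity; E(sigma(a)) = I
```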

A projection valued map $S \mapsto E(S)$ with the properties above is called a **spectral measure**. The spectral measure constructed from the functional calculus above is called *the spectral measure of $a$*.

Sometimes, the spectral theorem is stated in terms of the spectral measure, rather than in terms of multiplication operators. One can show that for every bounded Borel function $f$ on $\sigma(a)$, the functional calculus is given by “integration against the spectral measure”

$f(a) = \int_{\sigma(a)} f(\lambda) \, dE(\lambda)$,

where the integral converges in the following sense: for any $\epsilon > 0$, there is a partition $\{S_1, \ldots, S_n\}$ of $\sigma(a)$ into Borel sets such that

$\left\| f(a) - \sum_{i=1}^{n} f(\lambda_i) E(S_i) \right\| < \epsilon$ for any choice of $\lambda_i \in S_i$. (In fact, one can show that every spectral measure gives rise to a $*$-homomorphism of $\mathcal{B}(\sigma(a))$ into $B(H)$ by $f \mapsto \int f \, dE$.) In particular, one has the formula

$a = \int_{\sigma(a)} \lambda \, dE(\lambda)$.

This implies, in particular, that every selfadjoint operator can be approximated in norm by linear combinations of projections in the von Neumann algebra that it generates. Let us record this fact, and then give a more straightforward proof.

**Corollary 7:** *Every von Neumann algebra $M$ is equal to the norm closure of the linear span of the projections in $M$. In fact, every selfadjoint operator $a \in M$ is in the norm closure of the linear span of its spectral projections corresponding to intervals with rational endpoints.*

**Proof:** By Exercise C, it suffices to show that every selfadjoint operator $a$ in a von Neumann algebra $M$ can be approximated in norm by linear combinations of projections in $M$. Assume that $\sigma(a) \subseteq [-N, N]$, and let $-N = t_0 < t_1 < \cdots < t_n = N$ be a partition of $[-N, N]$ with mesh less than $\epsilon$. Put $S_1 = [t_0, t_1]$ and $S_i = (t_{i-1}, t_i]$ for $i \geq 2$, so that the $S_i$ partition $[-N, N]$. For every $i$, $E(S_i) \in W^*(a) \subseteq M$, and

$|\chi_{S_i}(t)(t - t_i)| \leq \epsilon$ for all $t \in \sigma(a)$,

so (functional calculus)

$\|E(S_i) a - t_i E(S_i)\| \leq \epsilon$.

Summing, one has

$a - \sum_{i=1}^{n} t_i E(S_i) = \sum_{i=1}^{n} \left( E(S_i) a - t_i E(S_i) \right)$,

and since the projections $E(S_i)$ are orthogonal we obtain

$\left\| a - \sum_{i=1}^{n} t_i E(S_i) \right\| = \max_{1 \leq i \leq n} \| E(S_i) a - t_i E(S_i) \| \leq \epsilon$, for any $\epsilon > 0$.

The final statement of the corollary follows from the same argument.
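The argument of Corollary 7 can be mimicked numerically (numpy, an arbitrary random symmetric matrix): bin the spectrum into intervals of length $\epsilon$, replace $a$ by the right endpoint $t_i$ on each bin, and observe that the error is at most the mesh.

```python
import numpy as np

# sum_i t_i E(S_i), where S_i are bins of length eps and t_i their right
# endpoints, approximates a within eps in operator norm.
rng = np.random.default_rng(6)
x = rng.standard_normal((6, 6))
a = (x + x.T) / 2
lam, u = np.linalg.eigh(a)

eps = 0.1
t_i = np.ceil(lam / eps) * eps           # right endpoint of the bin of each eigenvalue
approx = u @ np.diag(t_i) @ u.T          # this is sum over bins of t_i E(S_i)

print(np.linalg.norm(a - approx, 2) <= eps)  # True: error at most the mesh
```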

**Remark:** The operator $a$ lies in the norm closure of the linear span of the spectral projections associated with it, but the spectral projections themselves are (in general) not in the C*-algebra generated by $a$.

The spectral theorem (Theorem 5) holds for normal operators in place of selfadjoint operators, with the difference that $f$ is complex valued rather than real valued. Thus, every normal operator is unitarily equivalent to a multiplication operator. One may repeat the proof above (there are one and a half places where this poses a nontrivial challenge – the trickiest part being the equation labelled (*)). Another option is to use the result for selfadjoint operators, together with Exercise C and the existence of a spectral measure, in order to construct a spectral measure that is supported on (a compact subset of) the complex plane $\mathbb{C}$.

When dealing with a normal operator $a$, the spectrum is a subset of the complex plane, and one needs to use polynomials in $z$ and its conjugate $\bar{z}$; ordinary polynomials in $z$ alone cannot approximate uniformly arbitrary continuous functions on $\sigma(a)$. Likewise, the C*-algebra generated by $a$ is the closure of the polynomials in $a$ and its adjoint $a^*$. In accordance, the definition of a cyclic vector needs to be modified so that the proof runs smoothly: we say that a vector $h$ is **$*$-cyclic for $a$** if

$\overline{\operatorname{span}}\{p(a, a^*) h : p \text{ is a polynomial in 2 variables}\} = H$.

Then one can show that if $H$ is a Hilbert space and $a$ is a normal operator on $H$, then $H$ decomposes into a direct sum of $*$-cyclic subspaces. Then one proves that a normal operator with a $*$-cyclic vector is unitarily equivalent to the multiplication operator $M_z$ on $L^2(\sigma(a), \mu)$, where $\mu$ is a regular Borel probability measure on $\sigma(a)$. We leave the details as a significant exercise.

**Exercise L:** Show how to adjust the proof of the spectral theorem so that it works for normal operators; alternatively, deduce the spectral theorem for normal operators from the spectral theorem for selfadjoint operators.

**Exercise M:** Prove that a selfadjoint (or normal, if you wish) operator $a$ is compact if and only if $E(\{\lambda : |\lambda| > \epsilon\})$ is a finite rank projection for every $\epsilon > 0$ (here $E$ denotes the spectral measure associated with $a$).

**Exercise N:** Let $a$ and $b$ be two cyclic selfadjoint operators. Then $a$ is unitarily equivalent to $M_x$ on $L^2(\mu_a)$ and $b$ is unitarily equivalent to $M_x$ on $L^2(\mu_b)$, where $\mu_a$ and $\mu_b$ are (compactly supported) probability measures on $\mathbb{R}$. Prove that $a$ is unitarily equivalent to $b$ if and only if $\mu_a$ and $\mu_b$ are mutually absolutely continuous. The same result holds for $*$-cyclic normal operators. (Hint: you may want to recall the Radon-Nikodym theorem.)


A couple of years ago, after being inspired by lectures of Agler, Ball, McCarthy and Vinnikov on the subject, and after years of being influenced by Paul Muhly and Baruch Solel’s work, I realized that many of my different research projects (subproduct systems, the isomorphism problem, spaces of Dirichlet series with the complete Pick property, operator algebras associated with monomial ideals) are connected by the unifying theme of bounded analytic nc functions on subvarieties of the nc ball. “Realized” is a strong word, because many of my original ideas on this turned out to be false, and others I still don’t know how to prove. Anyway, it took me a couple of years and a lot of help, and here is this paper.

In short, we study algebras of bounded analytic functions on subvarieties of the noncommutative (nc) unit ball $\mathfrak{B}_d$:

$\mathfrak{B}_d = \bigcup_{n=1}^{\infty} \left\{ (X_1, \ldots, X_d) \text{ tuples of } n \times n \text{ matrices} : \|X_1 X_1^* + \cdots + X_d X_d^*\| < 1 \right\}$,

as well as bounded analytic functions that extend continuously to the “boundary”. We show that these algebras are multiplier algebras of appropriate nc reproducing kernel Hilbert spaces, and are completely isometrically isomorphic to the quotient of (the bounded nc analytic functions in the ball) by the ideal of nc functions vanishing on the variety. We classify these algebras in terms of the varieties, similar to classification results in the commutative case. We also identify previously studied algebras (such as multiplier algebras of complete Pick spaces and tensor algebras of subproduct systems) as algebras of bounded analytic functions on nc varieties. See the introduction for more.

We certainly plan to continue this line of research in the near future – in particular, the passage to other domains (beyond the ball), and the study of algebraic/bounded isomorphisms.


My book, **A First Course in Functional Analysis**, to be published with Chapman and Hall/CRC, will soon be out. There is already a cover, check it out on the CRC Press website.

This book is written to accompany an undergraduate course in functional analysis, where the course I had in mind is precisely the course that we give here at the Technion, with the same constraints. Constraint number 1: a course in measure theory is not mandatory in our undergraduate program. So how can one seriously teach functional analysis with significant applications? Well, one can, and I hope that this book proves that one can. As I already wrote before, measure theory is not a must. Of course anyone going for a graduate degree in math should study measure theory (and get an A), but I’d like the students to be able to study functional analysis before that (so that they can do a masters degree in operator theory with me).

I believe that the readers will find many other original organizational contributions to the presentation of functional analysis in this book, but I leave them for you to discover. Instructors can request an e-copy for inspection (in the link to the publisher website above), friends and direct students can get a copy from me, and I hope that the rest of the world will recommend this book to their library (or wait for the libgen version).
