### Introduction to von Neumann algebras, Lecture 1 (Introduction to the course, and a crash course in operator algebras, the spectral theorem)

#### by Orr Shalit

#### 1. Micro prologue

Perhaps we cannot start a course on von Neumann algebras without making a few historical notes about the beginning of the theory.

(To say it more honestly and openly, what I wanted to say is that perhaps I cannot teach a course on von Neumann algebras without finally reading the classical works by von Neumann and also learning a bit about the man. von Neumann was a true genius and has contributed all over mathematics, see the Wikipedia article).

In the late 1920s, Hilbert, prompted by the latest developments in quantum mechanics, was running a seminar with his assistants Nordheim and von Neumann, trying to make sense of it all. The issue was that Heisenberg, Born and Jordan (who were at Göttingen at the time too) had recently introduced “matrix mechanics”, a mathematical formalism for quantum mechanics which involved infinite matrices – the eigenvalues of which were supposed to represent observable quantities of physical significance. At the time, the spectral theory of compact operators on Hilbert space was well understood (due to Hilbert’s previous work on integral equations – which was also inspired by problems in physics), but the infinite matrices arising in matrix mechanics were not bounded.

Hilbert, Nordheim and von Neumann quickly wrote a paper on the subject, but only in von Neumann’s subsequent work, published in the years 1927-1929, were the mathematical foundations for quantum mechanics crystallized. His treatment appeared in his 1932 monograph Mathematical Foundations of Quantum Mechanics; this account of the basic formalism of quantum mechanics was so definitive, that this is more or less the formalism that is still taught today (and we should note that his contemporaries, most notably Weyl and Dirac, also published their own closely related accounts; but each of them is the main character of a different story).

In that short period von Neumann defined Hilbert spaces (which were already “around”) and developed the spectral theorem for bounded and unbounded self adjoint operators, and many of its applications (e.g., the functional calculus and the Stone-von Neumann theorem). After this fantastic success von Neumann was led to take a deeper look into operators on Hilbert space. His vision penetrating into the depths, he saw the beauty and richness, and took upon himself the construction of the foundations of the theory of operator algebras (in part jointly with Murray). Four of the foundational papers on operator algebras were a series of papers named “On Rings of Operators I-IV”. In the introduction to the first one, von Neumann lists four reasons to tackle the problems in operator algebras which they treat:

First, the formal calculus with operator-rings leads to them. Second, our attempts to generalize the theory of unitary group-representations essentially beyond their classical frame have always been blocked by the unsolved questions connected with these problems. Third, various aspects of the quantum mechanical formalism suggest strongly the elucidation of this subject. Fourth, the knowledge obtained in these investigations gives an approach to a class of abstract algebras without a finite basis, which seems to differ essentially from all types hitherto investigated.

Von Neumann continued to work on quantum mechanics, and some of his ideas and the theory of operator algebras had influence on further developments in algebraic quantum field theory and quantum statistical mechanics (as far as I gather, some of the motivations for developing aspects of the theory have turned out to be misguided from a physical point of view). But among his many other interests and activities, he also continued to develop the theory of operator algebras (or “rings of operators” as he called them) as a piece of pure mathematics. Indeed, the “pureness” of von Neumann’s motivations is evident already from the introduction to the first “Rings of Operators” paper, and it seems to me that “differ essentially from all types hitherto investigated” is the reason that appealed to them most. After his earlier developments in operator theory, it took them roughly five years (1930-1935) to understand the basic theory of von Neumann algebras (it then took another roughly ten years to have it polished and written, but it is clear that when writing the first “Rings of” paper, von Neumann *knew* the results that would appear in his 1949 paper “On Rings of Operators. Reduction Theory”. Let us not forget that there was a war in the middle of all the dramatic developments in operator algebras). The subject, which has grown to become what is now known as “von Neumann algebras”, has expanded exponentially since the 30s; the core and foundations of the subject – a sizable part of the material of this course – are all due to the early papers of von Neumann and Murray. Having learned this stuff from textbooks written many years later, it is humbling, inspiring and almost unbelievable to see how much was already there in the first papers.

We will now leave all discussions of historical background and connections to physics, and dive into pure, cold, mathematics. The development of the material will be, as usual in mathematics, only loosely connected to the historical development. One small remark for the reader who has already mastered this theory:

**Remark:** It is customary to prove the spectral theorem for normal bounded operators via Gelfand’s theory of commutative Banach and C*-algebras; this is a good example of teaching things not the way they historically happened, as Gelfand’s theory came about a decade after von Neumann’s spectral theorem (later thirties versus later twenties). This is also how I learned it. I took it as a small challenge to “unlearn” Gelfand theory and prove the spectral theorem without it, in order to reach the subject matter in the shortest path possible.

#### 2. Introduction

We start with an overview of the subject, and a sketchy description of what we hope to achieve in this course. Deeper discussions will come later.

We let $B(H)$ denote the algebra of all bounded operators on a (complex) Hilbert space $H$, equipped with the usual algebraic operations (including conjugation $T \mapsto T^*$, where $\langle T^* g, h \rangle = \langle g, T h \rangle$ for all $g, h \in H$) and the **operator norm**

$\|T\| = \sup\{\|Th\| : h \in H, \|h\| \leq 1\}$.

The adjoint and norm are related by the “C*-identity”, which is of key importance:

**(C*)** $\|T^* T\| = \|T\|^2$.

**Exercise A:** In case you never have, prove the C*-identity.

We let $I$ or $I_H$ or $1$ denote the identity operator on $H$.

**Definition:** A *(concrete) C*-algebra* is a subalgebra $A \subseteq B(H)$ such that

- $A$ is a $*$-algebra (if $T \in A$ then $T^* \in A$).
- $A$ is norm closed (if $T_n \in A$ and $T_n \to T$ then $T \in A$).

Here, $T_n \to T$ means $\|T_n - T\| \to 0$.

A C*-algebra $M \subseteq B(H)$ is said to be a *von Neumann algebra* if $I \in M$ and if whenever $T_i \in M$ and $T_i \to T$ (SOT) then $T \in M$. Here, $T_i \to T$ (SOT) means that $T_i h \to T h$ for all $h \in H$; in this case we say that $T_i$ converges to $T$ in the **strong operator topology**. (When we write $T_i \to T$, it is to be understood that $(T_i)$ is a convergent **net**. For the purposes of this introduction, the reader can think of $(T_i)$ as a convergent sequence of operators, but please refresh your memory regarding the notion of nets in topological spaces for later lectures.)

In short, a C*-algebra is a $*$-subalgebra of $B(H)$ which is closed in the norm, and a von Neumann algebra is a C*-algebra that contains the identity and is also closed in the strong operator topology.

There is another way to define von Neumann algebras. Given a set $S \subseteq B(H)$, we define *the commutant of $S$* (denoted $S'$) to be

$S' = \{T \in B(H) : TA = AT \text{ for all } A \in S\}$.

If $S = S^* := \{A^* : A \in S\}$, then it is easy to see that $S'$ is a von Neumann algebra. By the next lecture, we will be able to prove the following: every von Neumann algebra arises as the commutant $\pi(G)'$, where $G$ is a group and $\pi$ is a unitary representation, i.e., a homomorphism from $G$ into the group of unitaries on $H$. Thus, one may think of a von Neumann algebra as the algebra of all “symmetries” of some unitary representation.

In this course we will study the basic theory of von Neumann algebras. The first dividend of this theory is that it serves as a useful framework for studying operators on Hilbert space. Thus, our first task is to understand the C*- and von Neumann algebras that are generated by a single selfadjoint operator on $H$; much of this will be accomplished already in the first lecture.

We will see that if $T$ is selfadjoint, then $C^*(T) \cong C(\sigma(T))$ and $W^*(T) \cong L^\infty(X, \mu)$, where $(X, \mu)$ is a measure space (the precise meaning of the symbols will be given later). In fact, every commutative von Neumann algebra is isomorphic to some $L^\infty(X, \mu)$. First question: Given two von Neumann algebras $M$ and $N$, when are they isomorphic? (in fact, there are at least two very natural notions of what “isomorphic” means, and we will have to be more precise about that). Second question: What other kinds of von Neumann algebras exist?

As a warm up, let us look at a baby example of the first question. The algebra $L^\infty[0,1]$ acts on $L^2[0,1]$ by multiplication: given $f \in L^\infty[0,1]$,

$M_f h = f h, \quad h \in L^2[0,1]$,

is a bounded operator, and $\|M_f\| = \|f\|_\infty$. Likewise $\ell^\infty = \ell^\infty(\mathbb{N})$ acts by multiplication on $\ell^2 = \ell^2(\mathbb{N})$: given $f \in \ell^\infty$, it acts as a diagonal operator

$(M_f h)(n) = f(n) h(n), \quad n \in \mathbb{N}$,

and $\|M_f\| = \|f\|_\infty$. These two algebras, $L^\infty[0,1]$ and $\ell^\infty$, are abelian von Neumann algebras (the fact that they are strongly closed requires proof; it’s worth remembering that there is no dominated convergence theorem for nets). Are they isomorphic?
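Before addressing the question, the norm identity $\|M_f\| = \|f\|_\infty$ used above is easy to check numerically in a finite truncation; here is a minimal numpy sketch (the sequence `f` is made-up data, for illustration only).

```python
import numpy as np

# A bounded sequence f acting as a diagonal ("multiplication") operator
# on a finite-dimensional truncation of l^2.
f = np.array([0.5, -2.0, 1.0, 0.25])
M_f = np.diag(f)

# The operator norm of M_f equals the supremum norm of f.
op_norm = np.linalg.norm(M_f, ord=2)   # largest singular value
sup_norm = np.max(np.abs(f))
```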

They might look to you pretty much the same, or very different, depending on who you are. If you have no experience with such questions, then it is not clear how one may go about deciding this problem. Perhaps a healthy intuition will say that they must be different, since they live on measure spaces of different natures. This will indeed solve the problem.

Here is one way to look at the problem. The algebra $\ell^\infty$ has projections which are supported on single points. These projections have the property that there are no nonzero projections sitting strictly under them. On the other hand, any nonzero projection in $L^\infty[0,1]$ can be split into the sum of two smaller and nontrivial projections – this is because every set of nonzero measure can be split that way (the measure space has no atoms). It follows that the algebras cannot be $*$-isomorphic, since the notions of projections, positivity, and hence order, are invariant under $*$-isomorphisms.

In the setting of C*-algebras, projections are not always helpful, since there exist C*-algebras that have no nontrivial projections (can you think of an example?). But in a von Neumann algebra there is always a very rich supply of projections, and it turns out that the structure of the lattice of projections is the key to the main classification scheme of von Neumann algebras. We will spend a couple of weeks studying the lattice of projections in a von Neumann algebra.

As for the second question raised above (what other kinds of von Neumann algebras exist): it is clear that $B(H)$ itself is a von Neumann algebra, for every Hilbert space $H$. Of course, one can form direct sums, so there are von Neumann algebras of the form

$B(H_1) \oplus B(H_2) \oplus \cdots \oplus B(H_k)$.

The von Neumann algebras we listed are relatively simple examples of von Neumann algebras; we will later see that they all fall into one family, called *type I*. We will define later what it means to be type I; for now it suffices to say that type I algebras are either full matrix algebras of the kind $M_n(\mathbb{C})$, full operator algebras $B(H)$, commutative algebras of the kind $L^\infty(X, \mu)$, direct sums or tensor products of the above, or “continuous direct sums” of all the above (so called **direct integrals**). In principle, one is able to classify all type I algebras acting on a separable Hilbert space in relatively simple terms.

There are other kinds of von Neumann algebras, that are said to be of *type II*. Here is one way to construct such examples. Let $G$ be a countable group. Let $\ell^2(G)$ be the Hilbert space with orthonormal basis $\{\delta_g\}_{g \in G}$. For every $g \in G$, we define the (unitary) operator $\lambda_g$ by

$\lambda_g \delta_h = \delta_{gh}, \quad h \in G$.
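The translation operators just defined can be sanity-checked concretely on a small finite group; the following numpy sketch builds the left regular representation of $\mathbb{Z}/3$ (a finite stand-in for the countable groups in the text, chosen here only for illustration).

```python
import numpy as np

# Left regular representation of the cyclic group Z/3 on l^2(Z/3) ~ C^3:
# lambda_g sends the basis vector delta_h to delta_{g+h mod 3}.
n = 3

def lam(g):
    """The permutation matrix of left translation by g."""
    L = np.zeros((n, n))
    for h in range(n):
        L[(g + h) % n, h] = 1.0
    return L

prod = lam(1) @ lam(2)             # 1 + 2 = 0 in Z/3, so this is lam(0) = I
unitary_check = lam(1).T @ lam(1)  # each lam(g) is unitary
```

Since $\mathbb{Z}/3$ is abelian, the generated algebra is commutative (so, as the text notes, the corresponding group von Neumann algebra is of type I).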

Clearly, $g \mapsto \lambda_g$ is a faithful (unitary) representation of $G$. If we look at the subalgebra of $B(\ell^2(G))$ generated by $\{\lambda_g : g \in G\}$, we get an algebra that in general “knows” some things about the group (though, it is in general not possible to recover $G$ from it). Define

$L(G) = \overline{\operatorname{span}\{\lambda_g : g \in G\}}^{SOT}$

(the strong operator closure). Then $L(G)$ is a von Neumann algebra; it is called the *group von Neumann algebra of $G$*. One of the problems we will study is: what can one learn about a group from its von Neumann algebra, and vice versa. Another interesting thing to say is that group von Neumann algebras give a class of examples of von Neumann algebras that we have not listed above. Not always is $L(G)$ a type II algebra – for example, if $G$ is commutative then $L(G)$ is commutative, so it is type I. But in certain cases (and one can give precise conditions) $L(G)$ can be shown to be of a completely different nature than the type I examples, and is said to be of **type II**. For example, if $F_n$ is the free group on $n \geq 2$ generators, then $L(F_n)$ is of type II.

While on the subject of group von Neumann algebras, let us mention a very big open problem:

**Open problem:** Let $2 \leq n < m$. Is it true or false that $L(F_n) \cong L(F_m)$?

This is a notoriously difficult problem, and the attention that it has drawn resulted in several of the major developments in von Neumann algebras, for example *free probability theory* (about which we will probably have no time to elaborate).

We will say something about the general classification scheme for von Neumann algebras. It turns out that there are three basic types of von Neumann algebras: types I and II, examples of which we mentioned above, and yet another type – *type III* – which is of quite a different nature (as hinted above, these types are defined in terms of the structure of the lattice of projections in them). Every von Neumann algebra can be decomposed into a direct sum consisting of a type I, a type II and a type III von Neumann algebra. Classification of von Neumann algebras can in principle be reduced to the classification of “simple” von Neumann algebras, which are called **factors**; these are the “building blocks” of general von Neumann algebras.

Every type I factor is of the form $B(H)$, and these algebras are completely classified by $\dim H$. McDuff showed that there are uncountably many non-isomorphic type II factors (acting on a separable Hilbert space). The group algebras $L(F_n)$ mentioned above are examples of factors of type II, and the open problem above suggests that the classification of type II factors is beyond all hope. However, we will see that if $M$ and $N$ are infinite dimensional *amenable* factors of type $\mathrm{II}_1$, then $M \cong N$.

At first Murray and von Neumann were not able to decide whether there do or do not exist factors of type III. Eventually, von Neumann constructed an example, and decades later Powers showed that there are uncountably many non-isomorphic type III factors (acting on a separable Hilbert space). The classification of so-called *amenable* type III factors was carried out mostly by Connes, a work for which he was awarded the Fields Medal (following work of Tomita-Takesaki and others; the classification was completed by Haagerup). We will not discuss this deep and difficult subject in this course, but I hope that we will at least see uncountably many non-isomorphic examples.

Just as one can form a von Neumann algebra $L(G)$ that encodes some information about a group $G$, one can form a von Neumann algebra $L^\infty(X, \mu) \rtimes G$ (called the *crossed product*) that encodes the action of a group $G$ by measure preserving transformations on a measure space $(X, \mu)$. We will discuss how the properties of the action are encoded in the crossed product $L^\infty(X, \mu) \rtimes G$. A relatively simple fact is that, under a certain “freeness” assumption, the action is **ergodic** if and only if the crossed product is a **factor** (in which case, it is a type II factor). In the last two decades, the classification problem for type II factors arising this way has been studied in depth by Popa and others, and there are some celebrated results. The tip of the iceberg is this: under certain assumptions on the action, Popa has shown that if two crossed products are isomorphic, then the actions are (essentially) conjugate. This result is quite surprising (the converse direction is quite trivial), and the proofs are highly nontrivial, and will unfortunately remain beyond the scope of our course (however, hopefully by the end of the course a student will be in a position to approach the literature on the subject).

One final kind of problem that we will discuss will be very different from the kinds discussed in the last few paragraphs. The problems will be of the kind: what are the fundamental structural properties of von Neumann algebras? For example, von Neumann algebras are, in particular, C*-algebras. Not all C*-algebras are von Neumann algebras. What makes von Neumann algebras special? Do they have an abstract characterization? Von Neumann algebras are also, in particular, Banach spaces. Do they happen to have some special properties, in terms of their Banach space structure? It turns out that they do: if $M$ is a von Neumann algebra, then there is a (unique!) Banach space $M_*$ such that $M = (M_*)^*$ (i.e., $M$ is the Banach space dual of the Banach space $M_*$), and the existence of such a *pre-dual* characterizes the C*-algebras that “happen to be” von Neumann algebras.

That was a panoramic view of what we might hope to achieve in this course. But now we must start the course proper, and let us start from the very beginning.

#### 3. A bit of operator theory on $B(H)$

We now recall some things that everyone who attended a first course in functional analysis (so everyone attending this course) should know. An operator $T \in B(H)$ is said to be

- **selfadjoint** if $T = T^*$,
- **normal** if $T T^* = T^* T$,
- **isometric** if $T^* T = I$,
- **unitary** if it is a normal isometry,
- a **projection** if $T = T^2 = T^*$ (in this case it is the orthogonal projection onto some subspace of $H$; in Hilbert spaces, we will use the word **projection** for **orthogonal projections**),
- a **contraction** if $\|T\| \leq 1$,
- *positive* if $\langle T h, h \rangle \geq 0$ for all $h \in H$; we then write $T \geq 0$.

Let us write $P(H)$ for the projections on $H$, $U(H)$ for the unitaries on $H$, $B(H)_+$ for the positive elements, and $B(H)_{sa}$ for the selfadjoint elements. The notion of positivity induces an order on $B(H)_{sa}$: we say that $A \leq B$ if $B - A \geq 0$.

For any $T \in B(H)$, the *spectrum* of $T$ is the subset of the complex plane defined by

$\sigma(T) = \{\lambda \in \mathbb{C} : \lambda I - T$ does **not** have a bounded inverse$\}$.

For every $T \in B(H)$, $\sigma(T)$ is a closed set contained in $\{z \in \mathbb{C} : |z| \leq \|T\|\}$. For selfadjoint operators, the non-emptiness of the spectrum is easier to establish than for general operators, and follows from the following facts:

Fix $T = T^* \in B(H)$, and set $m = \inf_{\|h\| = 1} \langle T h, h \rangle$ and $M = \sup_{\|h\| = 1} \langle T h, h \rangle$ (it is easy to see that $\langle T h, h \rangle \in \mathbb{R}$ for $T = T^*$). Then

- $\sigma(T) \subseteq [m, M]$,
- $m, M \in \sigma(T)$,
- $\|T\| = \max\{|m|, |M|\}$.

In particular, the above facts can be put together to yield for a selfadjoint operator $T$:

**(*)** $\|T\| = \max\{|\lambda| : \lambda \in \sigma(T)\}$.

**Exercise B:** Prove the above 3 facts and equation (*) above (assuming that $T = T^*$). **Hint:** To solve the exercise, technology from a first course in functional analysis suffices; perhaps the most nontrivial part is $\|T\| = \max\{|m|, |M|\}$, which can be reformulated as $\|T\| = w(T)$, where $w(T) = \sup_{\|h\| = 1} |\langle T h, h \rangle|$ is the **numerical radius**. A direct proof can be found in many texts, for example Proposition 10.2.6 in my book. Alternatively, one can cleverly reduce to the case of compact selfadjoint operators.

Given any $T \in B(H)$ and any polynomial $p(z) = \sum_{k=0}^n a_k z^k$, the evaluation of $p$ at $T$ has an obvious meaning:

$p(T) = \sum_{k=0}^n a_k T^k$.

In particular, $p(T)$ is a well defined selfadjoint operator if $T = T^*$ and $p \in \mathbb{R}[z]$, i.e., if $p$ is a polynomial with real coefficients.

**Theorem 1 (spectral mapping theorem):** *Let $T \in B(H)$ and $p \in \mathbb{C}[z]$. Then*

$\sigma(p(T)) = p(\sigma(T))$.

**Proof:** Fix a non constant polynomial $p$ (the constant case is trivial). For every $\mu \in \mathbb{C}$, we can factor the polynomial $p(z) - \mu = c(z - \lambda_1) \cdots (z - \lambda_n)$. We therefore have

$p(T) - \mu I = c (T - \lambda_1 I) \cdots (T - \lambda_n I)$.

The left hand side is not invertible if and only if one of the factors is not invertible, which happens if and only if $\lambda_i \in \sigma(T)$ for some $i$. Thus, $\mu \in \sigma(p(T))$ if and only if there is some $\lambda \in \sigma(T)$ with $p(\lambda) = \mu$.
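The spectral mapping theorem is easy to test numerically for a Hermitian matrix; the matrix and polynomial below are arbitrary choices, for illustration only.

```python
import numpy as np

# Spectral mapping check: the eigenvalues of p(T) are exactly
# p applied to the eigenvalues of T, for p(z) = z^2 - 3z + 1.
T = np.array([[2.0, 1.0], [1.0, 2.0]])   # selfadjoint, sigma(T) = {1, 3}
p = lambda z: z**2 - 3*z + 1

spec_T = np.linalg.eigvalsh(T)            # ascending: [1., 3.]
pT = T @ T - 3*T + np.eye(2)              # p evaluated at T
spec_pT = np.sort(np.linalg.eigvalsh(pT))
```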

Given a compact topological space $X$, let $C(X)$ denote the algebra of complex valued continuous functions on $X$, and let $C_{\mathbb{R}}(X)$ be the real valued continuous functions. We equip these algebras with the supremum norm, and this gives both these algebras the structure of a Banach algebra (i.e., a Banach space with a multiplication such that $\|fg\| \leq \|f\| \|g\|$). These algebras also carry a $*$-operation $f^*(x) = \overline{f(x)}$, and this makes them into “abstract” C*-algebras (i.e., Banach $*$-algebras satisfying the identity **(C*)**).

The following theorem makes sense of “evaluating a continuous function at $T$”.

**Theorem 2 (continuous functional calculus):** *Let $T = T^* \in B(H)$, and let $C^*(T)$ be the unital C*-algebra generated by $T$:*

$C^*(T) = \overline{\{p(T) : p \in \mathbb{C}[z]\}}$.

Then there exists an isomorphism $\Phi : C(\sigma(T)) \to C^*(T)$ such that

- $\Phi(p) = p(T)$ for every polynomial $p$.
- $\Phi(\overline{f}) = \Phi(f)^*$ for every $f \in C(\sigma(T))$.
- $\|\Phi(f)\| = \|f\|_\infty$.
- If $f \in C(\sigma(T))$ and $f \geq 0$ then $\Phi(f) \geq 0$.

**Remark:** The mapping $\Phi(f)$ is usually denoted simply $f(T)$. Note that for every $f \in C(\sigma(T))$, we have that $f(T) \in C^*(T)$. The map $f \mapsto f(T)$ is referred to as *the continuous functional calculus*. The inverse mapping $\Phi^{-1} : C^*(T) \to C(\sigma(T))$ is called **the Gelfand transform**.

**Proof of Theorem 2:** We consider first the *real* norm closed algebra

$A = \overline{\{p(T) : p \in \mathbb{R}[z]\}}$

and show that there is an isometric isomorphism $C_{\mathbb{R}}(\sigma(T)) \to A$. The map $p \mapsto p(T)$ is clearly an algebraic homomorphism from $\mathbb{R}[z]$ into the real algebra $A$, which consists of selfadjoint operators only. By Weierstrass’s polynomial approximation theorem, the restrictions of real polynomials form a dense subspace of $C_{\mathbb{R}}(\sigma(T))$. So, to prove the existence of an isometric isomorphism, it suffices to show that $\|p(T)\| = \|p\|_\infty := \max_{\lambda \in \sigma(T)} |p(\lambda)|$ for every $p \in \mathbb{R}[z]$. But by equation (*) and the spectral mapping theorem,

$\|p(T)\| = \max\{|\mu| : \mu \in \sigma(p(T))\} = \max\{|p(\lambda)| : \lambda \in \sigma(T)\} = \|p\|_\infty$,

where we have used the fact that $p(T)$ is selfadjoint (here we use that the coefficients are real). Thus $p \mapsto p(T)$ extends to an algebra isomorphism $C_{\mathbb{R}}(\sigma(T)) \to A$ satisfying items 1, 2 and 3 in the statement of the theorem (item 2 is satisfied in an empty way). To show that the map preserves order, it is enough to show that if $f \geq 0$, then $f(T) \geq 0$. But if $f \geq 0$ and $f$ is continuous on $\sigma(T)$, then $g = \sqrt{f} \in C_{\mathbb{R}}(\sigma(T))$, so $f(T) = g(T)^2 \geq 0$, as the square of a selfadjoint operator.

Now if $f \in C(\sigma(T))$, then $f = g + ih$ for unique $g, h \in C_{\mathbb{R}}(\sigma(T))$. Then we can define $f(T) = g(T) + i h(T)$. This extends to a well defined homomorphism into $C^*(T)$, and it preserves positivity and the $*$-operation. Finally, $\Phi$ is isometric:

$\|f(T)\|^2 = \|f(T)^* f(T)\| = \|(\overline{f} f)(T)\| = \| |f|^2 \|_\infty = \|f\|_\infty^2$.

From the continuous functional calculus we will derive the spectral theorem below, but first a couple of quick corollaries.

**Corollary 3 (existence of a positive square root):** *Let $T \in B(H)_+$. Then there exists a unique positive operator $S$ such that $S^2 = T$. In fact, $S \in C^*(T)$.*

**Remark:** The operator $S$ is called *the positive square root* of $T$, and is denoted $\sqrt{T}$ or $T^{1/2}$.

**Proof of Corollary 3:** With the notation of the functional calculus, we have that $T = \Phi(f)$, where $f$ is the continuous function on $\sigma(T) \subseteq [0, \infty)$ given by $f(t) = t$. Then $S = \Phi(\sqrt{f})$ is the required square root (the function $\sqrt{f}$ is just $t \mapsto \sqrt{t}$; sorry for the pedantry!). The uniqueness is left as an exercise – you can find a solution at the end of this post.
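In finite dimensions, the continuous functional calculus amounts to “apply $f$ to the eigenvalues”; the following numpy sketch computes the positive square root this way (the matrix is an arbitrary positive example).

```python
import numpy as np

def apply_fn(T, f):
    """Functional calculus for a Hermitian matrix:
    diagonalize T = U diag(lam) U*, then apply f to the eigenvalues."""
    lam, U = np.linalg.eigh(T)
    return U @ np.diag(f(lam)) @ U.conj().T

T = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive: eigenvalues 1 and 3
S = apply_fn(T, np.sqrt)                 # the positive square root of T
```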

The following exercise shows that C*-algebras are generated by their selfadjoint elements. It will also allow us later to extend theorems that we obtain for selfadjoint operators to theorems on normal operators (see Exercise K below).

**Exercise C:** Prove that for every operator $T$ in a C*-algebra $A$, there exist two unique selfadjoint operators $T_1, T_2 \in A$ such that $T = T_1 + i T_2$. Moreover, $T$ is normal if and only if $T_1 T_2 = T_2 T_1$ (in this case we say that $T_1$ and $T_2$ *commute*).

**Exercise D:** Prove that every element in a unital C*-algebra $A$ is the linear combination of unitaries in $A$. (Hint: use the continuous functional calculus and the previous exercise). In other words, every unital C*-algebra is generated (in fact, spanned) by its unitaries.

**Exercise E:** Prove that if $\lambda$ is an isolated point in the spectrum of a selfadjoint operator $T$, then $\lambda$ is an eigenvalue (i.e., there exists a nonzero $h \in H$ such that $T h = \lambda h$).

To state another important decomposition theorem, we need a new definition.

**Definition:** An operator $V \in B(H)$ is said to be a *partial isometry* if the restricted operator $V\big|_{(\ker V)^\perp}$ is an isometry from $(\ker V)^\perp$ onto $\operatorname{ran} V$. The space $(\ker V)^\perp$ is called the **initial space** of $V$ and the space $\operatorname{ran} V$ is called the **final space** of $V$.

**Exercise F:** If $V$ is a partial isometry, then $V^* V$ is the orthogonal projection onto the initial space of $V$, and $V V^*$ is the orthogonal projection onto the final space of $V$.

**Exercise G:** For an operator $V \in B(H)$, the following are equivalent.

- $V$ is a partial isometry.
- $V = V V^* V$.
- $V^* V$ is a projection.
- $V^*$ is a partial isometry.

**Corollary 4 (polar decomposition):** *Let $T \in B(H)$. Then there exists a unique partial isometry $V$ with initial space $(\ker T)^\perp$ and a unique positive operator $R$ with $\ker R = \ker T$ such that $T = V R$. The operator $R$ is given by $R = \sqrt{T^* T}$, and it is contained in $C^*(T)$.*

**Remark:** The operator $\sqrt{T^* T}$ is denoted $|T|$ and is called the *absolute value* of $T$. The decomposition $T = V|T|$ is called **the polar decomposition of $T$**. We have noted that $|T| \in C^*(T)$. As for $V$, it is in general not contained in $C^*(T)$, but we shall see later that it is always contained in the von Neumann algebra $W^*(T)$ generated by $T$.

**Proof of Corollary 4:** Existence: Put $R = \sqrt{T^* T}$. Then

$\|R h\|^2 = \langle R^2 h, h \rangle = \langle T^* T h, h \rangle = \|T h\|^2$.

In particular, $\ker R = \ker T$. Moreover, the equality of norms implies that the map $R h \mapsto T h$ is a well defined isometric linear map from $\operatorname{ran} R$ to $\operatorname{ran} T$. It therefore extends continuously to an isometry from $\overline{\operatorname{ran} R}$ to $\overline{\operatorname{ran} T}$. Setting $V = 0$ on $(\operatorname{ran} R)^\perp = \ker R$ completes the construction.

Uniqueness: the assumptions imply that $T^* T = R V^* V R = R^2$, so $R$ is the unique positive square root of $T^* T$. In the “existence” part of the proof we already noted that there is a unique partial isometry with initial space $(\ker T)^\perp$ that maps $R h \mapsto T h$.
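For matrices, the polar decomposition can be read off from the singular value decomposition: if $T = W \Sigma Z^*$, then $|T| = Z \Sigma Z^*$ and $V = W Z^*$. A numpy sketch (the matrix is an arbitrary invertible example, so $V$ comes out unitary):

```python
import numpy as np

T = np.array([[0.0, 2.0], [1.0, 0.0]])
W, s, Zh = np.linalg.svd(T)              # T = W @ diag(s) @ Zh

absT = Zh.conj().T @ np.diag(s) @ Zh     # |T| = sqrt(T* T)
V = W @ Zh                               # the (here unitary) factor V
```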

#### 4. The spectral theorem

The spectral theorem for selfadjoint operators is the basic structure theorem for selfadjoint operators. It tells us how a general selfadjoint operator looks. Recall that if $T$ is a selfadjoint operator acting on a finite dimensional space $H = \mathbb{C}^n$, then $T$ is unitarily equivalent to a diagonal operator with real coefficients, that is, there exists a unitary operator $U$ such that

$U T U^* = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$,

where $\sigma(T) = \{\lambda_1, \ldots, \lambda_n\}$ (where some points in $\sigma(T)$ are possibly repeated).

Moreover, if $T$ is a compact selfadjoint operator on a Hilbert space, then $T$ is unitarily equivalent to a diagonal operator (an infinite diagonal matrix, acting by multiplication on $\ell^2$), the diagonal of which corresponds to the eigenvalues of $T$, which form a sequence converging to $0$:

$U T U^* = \operatorname{diag}(\lambda_1, \lambda_2, \ldots), \quad \lambda_n \to 0$.

If $T$ is unitarily equivalent to a diagonal operator where the diagonal elements form a bounded sequence of real numbers (not necessarily converging to $0$), then $T$ is a bounded selfadjoint operator (which is not necessarily compact). However, a general bounded selfadjoint operator need not be unitarily equivalent to a diagonal operator.

**Example:** The operator $M \in B(L^2[0,1])$ given by $(M f)(x) = x f(x)$ is a selfadjoint bounded operator, and it is an easy exercise to see that this operator has no eigenvalues (so it cannot be unitarily equivalent to a diagonal operator). However, the operator in this example is rather well understood, and it is “sort of” diagonal. The general case is not significantly more complicated than this.

To understand general selfadjoint operators, one needs to recall the notions of measure space and of $L^p$ spaces. Let $(X, \mu)$ be a measure space and consider the Hilbert space $L^2(X, \mu)$. Every $f \in L^\infty(X, \mu)$ defines a (normal) bounded operator $M_f$ on $L^2(X, \mu)$.

**Exercise H:** In case you never have, prove the following facts (or look them up; Kadison-Ringrose have a nice treatment relevant to our setting). Let $(X, \mu)$ be a $\sigma$-finite measure space and $f \in L^\infty(X, \mu)$.

- $\|M_f\| = \|f\|_\infty$ (where $\|f\|_\infty$ is the **essential supremum** of $|f|$, which is defined to be $\inf\{c \geq 0 : |f| \leq c \ \mu\text{-a.e.}\}$).
- $M_f^* = M_{\overline{f}}$.
- If $f$ is measurable and defines a bounded operator on $L^2(X, \mu)$, then $f$ is essentially bounded: $\|f\|_\infty \leq \|M_f\|$.
- If $g \in L^\infty(X, \mu)$ too, then $M_f M_g = M_{fg}$ and $M_f + M_g = M_{f+g}$.
- $M_f$ is selfadjoint if and only if $f$ is real valued almost everywhere.

The algebra $L^\infty(X, \mu)$ is an abstract C*-algebra with the usual algebraic operations, the $*$-operation $f^* = \overline{f}$, and norm $\|f\|_\infty$. The map

$L^\infty(X, \mu) \ni f \mapsto M_f \in B(L^2(X, \mu))$

is a $*$-representation (i.e., an algebraic homomorphism that preserves the adjoint: $M_f^* = M_{\overline{f}}$), which is isometric ($\|M_f\| = \|f\|_\infty$), so omitting $M$ we can think of $L^\infty(X, \mu)$ as a C*-subalgebra of $B(L^2(X, \mu))$. Since $M_f^* = M_{\overline{f}}$, the operator $M_f$ is selfadjoint if and only if $f$ is a.e. real valued. The operator $M_f$, where $f \in L^\infty(X, \mu)$, is called a *multiplication operator*. Multiplication operators form a rich collection of examples of selfadjoint operators. The spectral theorem says that this collection exhausts all selfadjoint operators: every selfadjoint operator is unitarily equivalent to a multiplication operator.

**Theorem 5 (the spectral theorem):** *Let $T$ be a selfadjoint operator on a Hilbert space $H$. Then there exists a measure space $(X, \mu)$, a unitary operator $U : L^2(X, \mu) \to H$, and a real valued $f \in L^\infty(X, \mu)$, such that*

$T = U M_f U^*$.

*When $H$ is separable, $X$ can be taken to be a locally compact Hausdorff space, and $\mu$ a regular Borel probability measure.*

We will prove the spectral theorem in the case that $T$ has a *cyclic vector*; the general case will then easily follow and will be left as an exercise.

**Definition:** Let $T \in B(H)$. A vector $h \in H$ is said to be a *cyclic vector* for $T$ if

$\overline{\operatorname{span}}\{T^n h : n = 0, 1, 2, \ldots\} = H$.

**Exercise I:** Let $T = T^* \in B(H)$. Prove that there exists a family of vectors $\{h_i\}_{i \in I}$ such that

$H = \bigoplus_{i \in I} H_i$,

where $H_i = \overline{\operatorname{span}}\{T^n h_i : n = 0, 1, 2, \ldots\}$; in particular, for every $i$, $T H_i \subseteq H_i$. In other words, every selfadjoint operator is the direct sum of operators that have a cyclic vector.

**Proof of the spectral theorem under the assumption that there exists a cyclic vector:** Suppose that $h$ is a cyclic unit vector for $T$. Let $X = \sigma(T)$. By the continuous functional calculus, there is an isometric $*$-isomorphism $\Phi : C(X) \to C^*(T)$ which satisfies $\Phi(p) = p(T)$ for every polynomial $p$. Recall that we write $f(T)$ for $\Phi(f)$.

Define a linear functional $\phi$ by

$\phi(f) = \langle f(T) h, h \rangle, \quad f \in C(X)$.

Then $\phi$ is a positive linear functional on $C(X)$, and $\phi(1) = \|h\|^2 = 1$. By the Riesz representation theorem there exists a unique regular probability measure $\mu$ (defined on all Borel subsets of $X$) such that $\phi(f) = \int_X f \, d\mu$ for all $f \in C(X)$. This is the measure that appears in the statement of the theorem.

Form $L^2(X, \mu)$. We define $U : L^2(X, \mu) \to H$ by first requiring that $U f = f(T) h$ for all $f \in C(X)$. Now, $C(X)$ is a dense subspace of $L^2(X, \mu)$, and by the cyclicality assumption, $\{f(T) h : f \in C(X)\}$ is a dense subspace of $H$. So if we will show that $U$ is isometric on $C(X)$, it will follow that $U$ extends to a unitary $L^2(X, \mu) \to H$; isometric-ness follows from:

$\|f(T) h\|^2 = \langle f(T)^* f(T) h, h \rangle = \langle (|f|^2)(T) h, h \rangle = \int_X |f|^2 \, d\mu = \|f\|_{L^2(\mu)}^2$,

for all $f \in C(X)$.

Finally, let $g(x) = x$ for $x \in X$. Clearly, $g$ is a bounded real valued function. Then $U M_g f = U(gf) = (gf)(T) h$, while $T U f = T f(T) h = (gf)(T) h$, so $T U = U M_g$, and the proof is complete.

**Remark:** In the proof above, we constructed a measure $\mu$ by

(**) $\int f \, d\mu = \langle f(T) h, h \rangle$ for all $f \in C(\sigma(T))$,

where $h$ was assumed to be a cyclic vector for $T$. In fact, the same construction makes very good sense also when $h$ is not necessarily cyclic. The measure $\mu = \mu_h$ is then sometimes referred to as *the spectral measure associated to $h$ (or to $h$ and $T$)*. Warning: the term “spectral measure” will appear again below and will then mean something different. In any case, it is an instructive exercise to see what the measure $\mu_h$ looks like when $T$ is a selfadjoint matrix and $h$ is an arbitrary vector.
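For a selfadjoint matrix, the measure of (**) is simply a sum of point masses at the eigenvalues, with weights given by the squared components of $h$ in an orthonormal eigenbasis. A numerical illustration (matrix and vector chosen arbitrarily):

```python
import numpy as np

# mu_h for a Hermitian matrix T and a unit vector h: point masses
# |<h, v_i>|^2 at the eigenvalues lambda_i (v_i orthonormal eigenvectors).
# Then <f(T)h, h> = sum_i f(lambda_i) |<h, v_i>|^2 for every f.
T = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, U = np.linalg.eigh(T)               # eigenvalues 1 and 3
h = np.array([1.0, 0.0])
weights = np.abs(U.conj().T @ h) ** 2    # the masses mu_h({lambda_i})

# check the defining property for f(x) = x^2:
lhs = h @ (T @ (T @ h))                  # <T^2 h, h>
rhs = np.sum(lam**2 * weights)           # integral of x^2 against mu_h
```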

**Exercise J:** Show how the spectral theorem for general selfadjoint operators follows from the case where has a cyclic vector. Take care to establish also the final assertion of the theorem.

#### 5. The Borel functional calculus

In Theorem 2, we saw that for a selfadjoint operator $T$ and a continuous function $f \in C(\sigma(T))$, one can define an operator $f(T)$. In fact,

$f(T) \in C^*(T)$.

The mapping $f \mapsto f(T)$ is called the continuous functional calculus, and has some nice algebraic and analytic properties. In this section we will extend the functional calculus to all bounded Borel functions, that is, we will show how to define $f(T)$ whenever $f$ is a bounded function defined on $\sigma(T)$ that is Borel measurable. This assignment (called *the Borel functional calculus*) will have similar nice properties, with the main differences being (i) the map $f \mapsto f(T)$ is not necessarily isometric, and (ii) $f(T)$ will not necessarily lie in $C^*(T)$, but rather in $W^*(T)$ (i.e., in the von Neumann algebra generated by $T$).

Let $B(X)$ denote the algebra of all bounded Borel measurable functions on a compact space $X$, equipped with the supremum norm and the adjoint operation $f^* = \overline{f}$.

**Theorem 6 (the Borel functional calculus):** *Let $T$ be a selfadjoint operator on a Hilbert space $H$, and write $X = \sigma(T)$. There exists a contractive $*$-homomorphism $B(X) \ni f \mapsto f(T)$ into $W^*(T)$ that extends the continuous functional calculus. If $(f_n)$ is a bounded sequence in $B(X)$ that converges pointwise to $f$, then $f_n(T) \to f(T)$ in the strong operator topology.*

**Remark:** By the end of the next lecture, you will be able to establish that this $*$-homomorphism is surjective onto $W^*(T)$, that is, that the von Neumann algebra generated by $T$ has the form $W^*(T) = \{f(T) : f \in B(X)\}$ (so you better be on the lookout!).

**Proof of Theorem 6:** Given $f \in B(X)$, the operator $f(T)$ is defined to be $U M_{f \circ g} U^*$, where $U$ is the unitary equivalence $T = U M_g U^*$ of $T$ with a multiplication operator. This makes sense, because $f$ being bounded and Borel measurable implies that $f \circ g \in L^\infty(X, \mu)$. The only subtle point is to prove that $f(T) \in W^*(T)$. We will prove this for the case where $T$ has a cyclic vector. The case where $T$ is a general selfadjoint operator on a separable Hilbert space will be left as an exercise (easy, given the proof for the cyclic case); the case where $H$ is not even separable will be ignored.

Thus, let us assume that $T = M_g$ on $L^2(X, \mu)$, for the function $g(x) = x$, where $\mu$ is a regular Borel probability measure on $X = \sigma(T)$. By a consequence of Lusin’s theorem, there is a bounded sequence $(f_n)$ of continuous functions that converges $\mu$-almost everywhere to $f$. By the dominated convergence theorem

$\|f_n(T) h - f(T) h\|^2 = \int_X |f_n - f|^2 |h|^2 \, d\mu \to 0, \quad h \in L^2(X, \mu)$,

so $f_n(T)$ converges SOT to $f(T)$, thus $f(T) \in W^*(T)$.

A similar argument also shows the final assertion of the theorem.

#### 6. The spectral measure

Fix a selfadjoint operator $T \in B(H)$. For the characteristic function $\chi_E$ of a Borel set $E \subseteq \sigma(T)$ we can define

$P(E) = \chi_E(T)$.

Since $\chi_E^2 = \chi_E$ and $\overline{\chi_E} = \chi_E$, the operator $P(E)$ is a projection. The properties of the functional calculus also imply the following.

**Exercise K:**

- $E(\emptyset) = 0$ and $E(\sigma(T)) = I$.
- $E(\Delta_1 \cap \Delta_2) = E(\Delta_1) E(\Delta_2)$ and $E(\Delta)^* = E(\Delta)$.
- $E\left(\bigcup_{n=1}^\infty \Delta_n\right) = \sum_{n=1}^\infty E(\Delta_n)$ for every disjoint family of Borel sets $\{\Delta_n\}_{n=1}^\infty$, where the sum converges in the strong operator topology.
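The properties in Exercise K can be checked mechanically for matrices, where $E(\Delta)$ is the orthogonal projection onto the span of the eigenvectors whose eigenvalues lie in $\Delta$. A minimal sketch (the helper `spectral_measure` and the test matrix are our own):

```python
import numpy as np

def spectral_measure(T):
    """For a selfadjoint matrix T, return the map E sending a predicate on
    eigenvalues to the spectral projection onto the matching eigenspaces."""
    eigvals, U = np.linalg.eigh(T)
    def E(pred):
        d = np.array([1.0 if pred(l) else 0.0 for l in eigvals])
        return U @ np.diag(d) @ U.conj().T
    return E

# sigma(T) = {1, 2, 4} for this selfadjoint matrix.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
E = spectral_measure(T)
I = np.eye(3)

print(np.allclose(E(lambda l: False), 0.0))   # E(empty set) = 0
print(np.allclose(E(lambda l: True), I))      # E(sigma(T)) = I
# multiplicativity: E(D1 cap D2) = E(D1) E(D2)
print(np.allclose(E(lambda l: 1 < l < 3), E(lambda l: l > 1) @ E(lambda l: l < 3)))
# additivity over the disjoint sets {l < 3} and {l > 3}
print(np.allclose(E(lambda l: l < 3) + E(lambda l: l > 3), I))
```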

A projection valued map $\Delta \mapsto E(\Delta)$ with the properties above is called a **spectral measure**. The spectral measure constructed from the functional calculus above is called the **spectral measure associated with $T$**. Sometimes, the spectral theorem is stated in terms of the spectral measure, rather than in terms of multiplication operators. One can show that for every bounded Borel function $f$ on $\sigma(T)$, the functional calculus is given by “integration against the spectral measure”

$$f(T) = \int_{\sigma(T)} f(\lambda) \, dE(\lambda),$$

where the integral converges in the following sense: for every $\epsilon > 0$, there is a partition of $\sigma(T)$ into Borel sets $\Delta_1, \ldots, \Delta_n$ such that

$$\left\| f(T) - \sum_{k=1}^n f(\lambda_k) E(\Delta_k) \right\| < \epsilon$$

for any choice of $\lambda_k \in \Delta_k$. (In fact, one can show that every spectral measure gives rise to a contractive $*$-homomorphism of $B(\sigma(T))$ into $B(H)$ by $f \mapsto \int f \, dE$.) In particular, one has the formula

$$T = \int_{\sigma(T)} \lambda \, dE(\lambda).$$

This implies, in particular, that every selfadjoint operator can be approximated in norm by linear combinations of projections in the von Neumann algebra that it generates. Let us record this fact, and then give a more straightforward proof.

**Corollary 7:** *Every von Neumann algebra $M$ is equal to the norm closure of the linear span of the projections in $M$. In fact, every selfadjoint operator $T$ is in the norm closure of the linear span of its spectral projections corresponding to intervals with rational endpoints.*

**Proof:** By Exercise C, it suffices to show that every selfadjoint operator $T \in M$ can be approximated in norm by linear combinations of projections in $M$. Assume that $\sigma(T) \subseteq [0,1]$, and let $0 = t_0 < t_1 < \cdots < t_n = 1$ be a partition of $[0,1]$ with $\max_k (t_k - t_{k-1}) \leq \epsilon$. For every $k$, put $\Delta_k = [t_{k-1}, t_k)$ (with $\Delta_n = [t_{n-1}, 1]$), and

$$\sup_{x \in \Delta_k} |x - t_{k-1}| \leq \epsilon,$$

so (functional calculus)

$$\left\| T E(\Delta_k) - t_{k-1} E(\Delta_k) \right\| \leq \epsilon.$$

Summing, one has

$$T - \sum_{k=1}^n t_{k-1} E(\Delta_k) = \sum_{k=1}^n \left( T E(\Delta_k) - t_{k-1} E(\Delta_k) \right),$$

and since the projections are orthogonal we obtain

$$\left\| T - \sum_{k=1}^n t_{k-1} E(\Delta_k) \right\| = \max_{1 \leq k \leq n} \left\| T E(\Delta_k) - t_{k-1} E(\Delta_k) \right\| \leq \epsilon$$

for any partition with mesh at most $\epsilon$.

The final statement of the corollary follows from the same argument, using partitions with rational endpoints.
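The proof above is easy to replay numerically: for a selfadjoint matrix with spectrum in $[0,1]$, the sum $\sum_k t_{k-1} E(\Delta_k)$ over a partition of mesh $1/n$ approximates $T$ to within $1/n$ in operator norm. A sketch (the random test matrix and the normalization are our own setup):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
T = A + A.T                                      # selfadjoint
T = T - np.min(np.linalg.eigvalsh(T)) * np.eye(5)
T = T / np.linalg.norm(T, 2)                     # now sigma(T) lies in [0, 1]

eigvals, U = np.linalg.eigh(T)

def approximate(n):
    """sum_k t_{k-1} E(Delta_k) for the partition 0 = t_0 < ... < t_n = 1."""
    t = np.linspace(0.0, 1.0, n + 1)
    S = np.zeros_like(T)
    for k in range(n):
        # E(Delta_k) for Delta_k = [t_k, t_{k+1}); the last interval also gets 1
        mask = (eigvals >= t[k]) & ((eigvals < t[k + 1]) | (k == n - 1))
        S += t[k] * (U @ np.diag(mask.astype(float)) @ U.T)
    return S

for n in [2, 10, 100]:
    err = np.linalg.norm(T - approximate(n), 2)
    print(n, err <= 1.0 / n + 1e-9)              # error is at most the mesh
```

Since $T$ and the approximant are simultaneously diagonalized, the operator-norm error is exactly the largest gap between an eigenvalue and its chosen partition point, which is the argument of the proof in miniature.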

**Remark:** The operator $T$ is in the norm closure of the linear span of the spectral projections associated with it, but the spectral projections are (in general) not in the C*-algebra $C^*(T)$ generated by $T$.

#### 7. The spectral theorem for normal operators

The spectral theorem (Theorem 5) holds for normal operators in place of selfadjoint operators, with the difference that the function $g$ is complex valued rather than real valued. Thus, every normal operator is unitarily equivalent to a multiplication operator. One may repeat the proof above (there is one and a half places where this poses a nontrivial challenge – the trickiest part being the equation labelled (*)). Another option is to use the result for selfadjoint operators, together with Exercise C and the existence of a spectral measure, in order to construct a spectral measure that is supported on (a compact subset of) the complex plane $\mathbb{C}$.

When dealing with a normal operator $N$, the spectrum $\sigma(N)$ is a subset of the complex plane, and one needs to use polynomials in $z$ and its conjugate $\bar{z}$; ordinary polynomials cannot uniformly approximate arbitrary continuous functions on $\sigma(N)$. Likewise, the C*-algebra generated by $N$ is the closure of the polynomials in $N$ and its adjoint $N^*$. In accordance, the definition of a cyclic vector needs to be modified so that the proof runs smoothly: we say that a vector $h \in H$ is **$*$-cyclic for $N$** if

$$H = \overline{\operatorname{span}} \left\{ p(N, N^*) h : p \text{ is a polynomial in 2 variables} \right\}.$$

Then one can show that if $H$ is a Hilbert space and $N$ is a normal operator on $H$, then $H$ decomposes into a direct sum of $*$-cyclic subspaces. Then one proves that a normal operator with a $*$-cyclic vector is unitarily equivalent to $M_z$ on $L^2(\mu)$, where $\mu$ is a regular Borel probability measure on $\sigma(N)$. We leave the details as a significant exercise.
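A concrete normal, non-selfadjoint operator to keep in mind is the cyclic shift, which is diagonalized by the discrete Fourier transform. The following sketch (our own toy computation) verifies normality, the diagonalization, and that the spectrum consists of roots of unity; it also shows the adjoint being reached by a polynomial expression (here even in $S$ alone, since $S^4 = I$).

```python
import numpy as np

n = 4
S = np.roll(np.eye(n), 1, axis=0)       # cyclic shift: S e_k = e_{k+1 mod n}
# unitary DFT matrix F[j, k] = exp(-2*pi*i*j*k/n) / sqrt(n)
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)

print(np.allclose(S @ S.conj().T, S.conj().T @ S))   # True: S is normal
print(np.allclose(S, S.conj().T))                    # False: but not selfadjoint

D = F @ S @ F.conj().T                               # diagonal in the Fourier basis
print(np.allclose(D, np.diag(np.diag(D))))           # True
roots = np.exp(2j * np.pi * np.arange(n) / n)        # sigma(S) = 4th roots of unity
print(all(np.min(np.abs(np.diag(D) - r)) < 1e-8 for r in roots))

# The adjoint is a *-polynomial expression in S; here S^4 = I gives S* = S^3.
print(np.allclose(S.conj().T, np.linalg.matrix_power(S, 3)))
```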

**Exercise L:** Show how to adjust the proof of the spectral theorem so that it works for normal operators; alternatively, deduce the spectral theorem for normal operators from the spectral theorem for selfadjoint operators.

#### 8. Additional exercises

**Exercise M:** Prove that a selfadjoint (or normal, if you wish) operator $T$ is compact if and only if $E(\{\lambda \in \sigma(T) : |\lambda| \geq \epsilon\})$ is a finite rank operator for every $\epsilon > 0$ (here $E$ denotes the spectral measure associated with $T$).

**Exercise N:** Let $A$ and $B$ be two cyclic selfadjoint operators. Then $A$ is unitarily equivalent to $M_x$ on $L^2(\mu)$ and $B$ is unitarily equivalent to $M_x$ on $L^2(\nu)$, where $\mu$ and $\nu$ are (compactly supported) regular Borel probability measures on $\mathbb{R}$. Prove that $A$ is unitarily equivalent to $B$ if and only if $\mu$ and $\nu$ are mutually absolutely continuous. The same result holds for $*$-cyclic normal operators. (Hint: you may want to recall the Radon–Nikodym theorem.)
