Introduction to von Neumann algebras, Lecture 6 (tensor products of Hilbert spaces and vN algebras; the GNS representation, the hyperfinite II_1 factor)

by Orr Shalit

In this lecture we will introduce tensor products of Hilbert spaces. This construction is very useful for exhibiting various operators, and it will enable us to introduce new von Neumann algebras; in particular, we will construct the so-called hyperfinite II_1 factor.

1. Hausdorff completion and tensor products of Hilbert spaces

Let H and K be two Hilbert spaces. Our goal is to construct a new Hilbert space, formed from H and K, called the Hilbert space tensor product and denoted H \otimes K.

Definition 1: Let V be a vector space. A semi-inner product is a function \langle \cdot, \cdot \rangle : V \times V \to \mathbb{C} such that for all a \in \mathbb{C} and all v,u,w \in V:

  1. \langle v,v \rangle \geq 0,
  2. \langle a v + u, w \rangle = a\langle v, w \rangle + \langle u, w \rangle,
  3. \langle u,v \rangle = \overline{\langle v, u \rangle}.

Of course, if \langle v, v \rangle = 0 occurs only for v = 0, then \langle \cdot, \cdot \rangle is said to be an inner product.

Definition 2: Given a semi-inner product, we define the associated semi-norm \| \cdot \| by

\|v\| := \sqrt{\langle v,v \rangle}.

Exercise A: A semi-inner product satisfies the Cauchy-Schwarz inequality:

|\langle u,v \rangle| \leq \|u\|\|v\|.

Consequently, the semi-norm arising from a semi-inner product is really a semi-norm. It follows that N = \{v \in V : \|v\| = 0\} is a subspace, and that \langle v, u \rangle = 0 for all v \in V and u \in N. Therefore, on V/N we can define an inner product

\langle u + N, v + N \rangle := \langle u, v \rangle.

Finally, the inner product space V/N can be completed in a unique way to form a Hilbert space H.

Definition 3: Given a semi-inner product on a vector space (V, \langle \cdot, \cdot \rangle), the Hilbert space H constructed above is called the Hausdorff completion of (V, \langle \cdot, \cdot \rangle).

Definition 4: Given Hilbert spaces H and K, let H * K denote the free vector space with basis \{h * k : h \in H, k \in K\}; that is, H*K is just the space of all finite (formal) linear combinations \sum c_i h_i * k_i. On H * K define a semi-inner product

\langle \sum_i c_i h_i * k_i , \sum_j c'_j h'_j * k'_j \rangle = \sum_{i,j} c_i \overline{c'_j} \langle h_i, h'_j\rangle_H \langle k_i, k'_j \rangle_K.

Exercise B: This is indeed a semi-inner product. (Hint: The only thing that requires proof is positive semi-definiteness. You can find a proof in all kinds of books, e.g., Takesaki. But I think the following might be an elegant approach: Given two finite dimensional Hilbert spaces H = \mathbb{C}^m and K = \mathbb{C}^n, and given x \in H and y \in K, one has the rank one operator x y^t : K \to H given by matrix multiplication. Observe that [A, B] = Tr(B^* A) defines a semi-inner product (which is actually an inner product) on the linear maps K \to H. Notice further that x*y \mapsto x y^t is a semi-inner-product preserving map from H*K to the linear maps from K to H.)

Definition 4: The Hilbert space tensor product of two Hilbert spaces H and K, denoted H \otimes K, is the Hausdorff completion of H*K. The image of h*k in H \otimes K is denoted h \otimes k. Vectors of the form h \otimes k are called simple tensors.

Note that

\langle h_1 \otimes k_1, h_2 \otimes k_2 \rangle_{H \otimes K} = \langle h_1, h_2 \rangle_H \langle k_1, k_2 \rangle_K.

Example: The Hilbert space tensor product of \mathbb{C}^m and \mathbb{C}^n can be identified with M_{m \times n}(\mathbb{C}), as in the hint of Exercise B.
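For the computationally minded, here is a small numerical sanity check of this identification (a numpy sketch, not part of the lecture): we realize the simple tensor h \otimes k as the rank one matrix h k^t, as in the hint of Exercise B, and verify that the inner product [A, B] = Tr(B^* A) reproduces \langle h_1, h_2 \rangle \langle k_1, k_2 \rangle.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4

def cvec(d):
    # random vector in C^d
    return rng.standard_normal(d) + 1j * rng.standard_normal(d)

h1, h2, k1, k2 = cvec(m), cvec(m), cvec(n), cvec(n)

# Identify the simple tensor h (x) k with the rank-one matrix h k^t (no conjugate).
T1, T2 = np.outer(h1, k1), np.outer(h2, k2)

# Matrix side: [A, B] = Tr(B^* A).  Vector side: <h1, h2>_H <k1, k2>_K, with the
# inner product linear in the first variable (np.vdot conjugates its FIRST
# argument, so <u, v> in the convention of these notes is np.vdot(v, u)).
lhs = np.trace(T2.conj().T @ T1)
rhs = np.vdot(h2, h1) * np.vdot(k2, k1)
```

The two sides agree up to floating-point error, confirming that the map x*y \mapsto x y^t preserves the (semi-)inner product.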

Exercise C: For every h, h_1, h_2 \in H, k \in K and a \in \mathbb{C}, it holds that

(h_1 + h_2) \otimes k = h_1 \otimes k + h_2 \otimes k

(likewise with the roles of H and K reversed) and

(ah) \otimes k = a ( h \otimes k) = h \otimes (ak).

Exercise D: If \{e_m\} is an orthonormal basis for H and \{f_n\} is an orthonormal basis for K, then \{e_m \otimes f_n \} is an orthonormal basis for H \otimes K.
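Exercise D can be tested in finite dimensions, where e_i \otimes f_j is realized as a Kronecker product of coordinate vectors (a numpy sketch, not part of the lecture): the Gram matrix of the family \{e_i \otimes f_j\} should be the identity.

```python
import numpy as np

m, n = 2, 3
E, F = np.eye(m), np.eye(n)   # standard orthonormal bases of C^m and C^n

# Realize e_i (x) f_j as the Kronecker product of the coordinate vectors.
basis = [np.kron(E[i], F[j]) for i in range(m) for j in range(n)]

# Gram matrix of the mn vectors e_i (x) f_j in C^{mn}.
G = np.array([[np.vdot(u, v) for v in basis] for u in basis])
```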

Exercise E: If A \subset H and B \subset K are sets such that \overline{span}A = H and \overline{span}B = K, then

H \otimes K = \overline{span\{a \otimes b : a \in A, b \in B\}}.

Let us fix notation for what follows. Let H, K be Hilbert spaces, let \{e_m\}_{m \in I} be an orthonormal basis for H and \{f_n\}_{n \in J} an orthonormal basis for K. By Exercise D, every element of H \otimes K can be written as a norm convergent sum \sum_{m,n} c_{mn} e_m \otimes f_n. Rearranging, we see that every element of H \otimes K can be written as a norm convergent sum \sum_n h_n \otimes f_n, where h_n \in H and the summands are pairwise orthogonal; indeed,

\langle h_i \otimes f_i, h_j \otimes f_j \rangle = \langle h_i, h_j \rangle \langle f_i, f_j \rangle = \delta_{ij} \|h_i\|^2.

This gives rise to an identification

H \otimes K \cong \bigoplus_{n \in J} H_n,

where every H_n is a copy of H.

2. Tensor products of von Neumann algebras

Keep the notation from above. Given a \in B(H) and b \in B(K), we define the tensor product of  a and b, denoted a\otimes b : H \otimes K \to H \otimes K, by first defining it on simple tensors:

[a\otimes b] (h \otimes k) = ah \otimes bk.

One then wishes to extend this definition from simple tensors, first to finite sums of simple tensors, and then to the whole space H \otimes K. It suffices to show that a \otimes b defines a bounded operator on finite linear combinations of simple tensors. In fact, it is enough to consider a \otimes 1, because proving that 1 \otimes b is bounded is analogous, and then a \otimes b = (a \otimes 1) (1 \otimes b).

We shall make one more reduction: what we will actually work to show is that u \otimes 1 is an isometry whenever u \in U(H) (in fact u \otimes 1 will be unitary, which is very easy to see once one knows that it is well defined). This will suffice because every bounded operator is a linear combination of four unitaries (Lecture 1). Now, if u is unitary, then

\langle [u \otimes 1] (h_1 \otimes k_1), [u \otimes 1] (h_2 \otimes k_2)\rangle = \langle uh_1, uh_2 \rangle \langle k_1, k_2 \rangle = \langle h_1 \otimes k_1 , h_2 \otimes k_2 \rangle.

This shows that u \otimes 1 preserves inner products, hence is an isometry on the space of finite sums of simple tensors. Whence u \otimes 1 extends to an isometry (actually a unitary) on H \otimes K.

Remark: The explanation we gave here is different than the one I gave in class. If you attended the lecture, can you see why I gave a different explanation?

Now that we know that a \otimes b \in B(H \otimes K), it is a simple matter to obtain

\|a \otimes b \| = \|a\| \|b\|.

The tensor product of operators enjoys some other nice properties:

  1. (a_1 + a_2) \otimes b = a_1 \otimes b + a_2 \otimes b (and likewise it is linear in the right factor),
  2. (a \otimes b)(a' \otimes b') = aa' \otimes bb',
  3. (a\otimes b)^* = a^* \otimes b^*.
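In finite dimensions, with respect to the ordered basis \{e_m \otimes f_n\}, the operator a \otimes b is the Kronecker product of matrices, and the properties above, together with \|a \otimes b\| = \|a\| \|b\|, can be checked numerically (a numpy sketch, not part of the lecture; np.linalg.norm(\cdot, 2) is the operator norm, i.e., the largest singular value).

```python
import numpy as np

rng = np.random.default_rng(1)

def cmat(d):
    # random matrix in M_d(C)
    return rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

a, a2, b, b2 = cmat(3), cmat(3), cmat(4), cmat(4)

# (a (x) b)(a' (x) b') = aa' (x) bb'
mult = np.allclose(np.kron(a, b) @ np.kron(a2, b2), np.kron(a @ a2, b @ b2))
# (a (x) b)^* = a^* (x) b^*
star = np.allclose(np.kron(a, b).conj().T, np.kron(a.conj().T, b.conj().T))
# ||a (x) b|| = ||a|| ||b||   (operator norm)
norm = np.isclose(np.linalg.norm(np.kron(a, b), 2),
                  np.linalg.norm(a, 2) * np.linalg.norm(b, 2))
```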

Definition 5: Let A \subseteq B(H) and B \subseteq B(K) be von Neumann algebras. The von Neumann algebra tensor product (or simply: the tensor product) of A and B is the algebra

A \overline{\otimes} B := W^*(\{a \otimes b : a \in A, b \in B\}),

i.e., A \overline{\otimes} B is the von Neumann algebra generated by all tensor products a \otimes b, where a \in A and b \in B.

Now for all j \in J, let U_j : H \to H \otimes K be the operator

U_j h = h \otimes f_j.

Letting U : \bigoplus_{n \in J} H_n \to H \otimes K be the isomorphism mentioned above (given by U((h_n)_{n \in J}) = \sum_n h_n \otimes f_n), we may view U_j as the restriction U_j = U\big|_{H_j}, that is, as an operator in B(H_j, H \otimes K).

For every T \in B(H \otimes K) and every i,j \in J define t_{ij} = U_i^* T U_j \in B(H_j, H_i) \cong B(H). This gives a (usually infinite) “operator block matrix” [t_{ij}]_{i,j \in J}. If |J| = k \in \mathbb{N}, then [t_{ij}] \in M_k(B(H)). There is a bijection

T \longleftrightarrow [t_{ij}]_{i,j \in J}

B(H \otimes K) \longleftrightarrow B(\oplus_{n \in J}H_n).

Operator block matrices follow the usual algebraic rules, and act on elements in \oplus_{n \in J} H_n by matrix-versus-column multiplication.

Proposition 6: \{U_i U_j^* : i,j \in J\}' = \{a \otimes 1 : a \in B(H)\}. Moreover, T is in the above set if and only if there is some a \in B(H) such that t_{ij} = \delta_{ij} a.

Proof: A direct calculation shows “\supseteq“. Conversely, suppose T U_i U_j^* = U_i U_j^* T for all i,j \in J. Multiplying this identity by U_k^* on the left and by U_l on the right, and using the fact that the U_i are isometries with pairwise orthogonal ranges (so U_p^* U_q = \delta_{pq} I), one sees that t_{ij} = \delta_{ij} a for some a \in B(H). An operator T with such a diagonal block operator matrix is easily seen to act as a \otimes 1.

Now define a *-representation \pi : B(H) \to B(H \otimes K) by \pi(a) = a \otimes 1.

Exercise F: Is \pi WOT/SOT continuous?

Now let u_{ij} \in B(K) be the rank one operator given by u_{ij}(k) = \langle k, f_j \rangle f_i. A direct calculation shows that U_i U_j^* = 1 \otimes u_{ij}.
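The identity U_i U_j^* = 1 \otimes u_{ij} is easy to verify in finite dimensions, where U_j becomes the (mn) \times m matrix \mathrm{kron}(I_H, f_j) (a numpy sketch, not part of the lecture):

```python
import numpy as np

m, n = 3, 4                     # dim H, dim K
I_H, F = np.eye(m), np.eye(n)   # F[:, j] is the basis vector f_j

def U(j):
    # U_j : H -> H (x) K, U_j h = h (x) f_j, as an (mn) x m matrix
    return np.kron(I_H, F[:, [j]])

def u(i, j):
    # rank one operator u_{ij} on K: u_{ij} k = <k, f_j> f_i
    return np.outer(F[:, i], F[:, j])

i, j = 1, 2
identity_holds = np.allclose(U(i) @ U(j).T, np.kron(I_H, u(i, j)))
isometry = np.allclose(U(j).T @ U(j), I_H)   # the U_j are isometries
```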

Proposition 7: Let A \subseteq B(H) be a von Neumann algebra. Then A \otimes I is a von Neumann algebra. To be precise: 

\pi(A) = \pi(A)'' = A \overline{\otimes} \mathbb{C}I_K.

Proof: Since U_i U_j^* = 1 \otimes u_{ij} \in \pi(A)', we have by the previous proposition \pi(A)'' \subseteq \{a \otimes 1 : a \in B(H)\} = \pi(B(H)). Therefore, if x \in \pi(A)'' then x = a \otimes 1 for some a \in B(H). Since \pi(A') \subseteq \pi(A)', the operator a \otimes 1 commutes with b \otimes 1 for all b \in A'; hence a commutes with every b \in A', so a \in A'' = A and x \in \pi(A). This shows that \pi(A)'' \subseteq \pi(A), and since \pi(A) \subseteq \pi(A)'' is tautological, the proof is complete.

As a consequence, we obtain

\{1 \otimes u_{ij} : i,j \in J\}'' = I \otimes B(K).

Corollary: (B(H) \otimes I)' = I \otimes B(K).

Theorem 8: Let A \subseteq B(H) be a von Neumann algebra. Then 

A \overline{\otimes} B(K) = \{T \in B(H \otimes K) : t_{ij} \in A \ \forall i,j \in J\}.

Moreover:

  1. (A \overline{\otimes} B(K))' = A' \otimes I.
  2. (A \otimes I)' = A' \overline{\otimes} B(K).

Proof: See the following exercise.

Exercise G: Complete the details.

Project C: What about (A \overline{\otimes} B)'? It is very reasonable and elegant to conjecture that (A \overline{\otimes} B)'  = A' \overline{\otimes} B'. This is true, but (maybe surprisingly) highly non-trivial. For Project C, show that

  1. (A \overline{\otimes} B)' = A' \overline{\otimes} B', and
  2. Z(A \overline{\otimes} B) = Z(A) \overline{\otimes} Z(B).

Exercise H: Let A be a von Neumann algebra, and suppose that there exists a family \{v_{i,j}\}_{i,j \in I} \subseteq A such that

  1. v_{ij}^* = v_{ji},
  2. v_{ij}v_{kl} = \delta_{jk} v_{il},
  3. \sum_i v_{ii} = 1.

(Such a family is called a system of matrix units in A). Let p = v_{i_0 i_0} for some i_0 \in I. Then p \in P(A). Prove that

A \cong A_p \overline{\otimes} B(\ell^2(I)).
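A minimal finite-dimensional illustration of Exercise H (a numpy sketch, not part of the lecture): in A = M_4(\mathbb{C}), viewed as M_2(M_2(\mathbb{C})), the operators v_{ij} = E_{ij} \otimes I_2 (with E_{ij} the standard matrix units of M_2) form a system of matrix units over the index set I = \{0, 1\}; here p = v_{00}, A_p \cong M_2(\mathbb{C}), and A \cong A_p \otimes B(\ell^2(I)) becomes M_4 \cong M_2 \otimes M_2.

```python
import numpy as np

def E(i, j):
    # standard matrix units of M_2(C)
    M = np.zeros((2, 2))
    M[i, j] = 1.0
    return M

I2 = np.eye(2)
# v_{ij} = E_{ij} (x) I_2 in A = M_4(C); entries are real, so * is just transpose
v = {(i, j): np.kron(E(i, j), I2) for i in range(2) for j in range(2)}

adjoint = all(np.allclose(v[i, j].T, v[j, i])
              for i in range(2) for j in range(2))
product = all(np.allclose(v[i, j] @ v[k, l], (j == k) * v[i, l])
              for i in range(2) for j in range(2)
              for k in range(2) for l in range(2))
unit = np.allclose(v[0, 0] + v[1, 1], np.eye(4))
```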

Using the above exercise, you can now prove:

Exercise I: If B is a type II_\infty factor on a separable Hilbert space, then there is a type II_1 factor A and a separable Hilbert space H such that

B \cong A \overline{\otimes} B(H).

3. The hyperfinite II_1 factor

We now meet a special, particular II_1 factor, called the hyperfinite II_1 factor. It is constructed as follows.

For every k=1,2,\ldots, let N_k = M_{2^k}(\mathbb{C}), and let \tau_k be the unique tracial state (the normalized trace) on N_k. The algebra N_k can be identified as a unital subalgebra of N_{k+1}, via the unital, injective and trace preserving *-homomorphism \phi_k : N_k \to N_{k+1} given by

\phi_k : a \mapsto \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}.

Let M = \bigcup_{k=1}^\infty N_k be the normed *-algebra formed as the increasing union of the N_k's (all the algebraic operations are performed in one of the N_k's; that is, if a,b \in M, we find some k so that a,b \in N_k, and then we define a+b in N_k; same for ab, \|a\|, a^*). On M, we define a functional \tau : M \to \mathbb{C} by \tau(a) = \tau_k(a) if a \in N_k. Since the inclusions \phi_k are trace preserving (\tau_{k+1} \circ \phi_k = \tau_k), the functional \tau is well defined.
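The compatibility \tau_{k+1} \circ \phi_k = \tau_k, which makes \tau well defined, can be checked directly in a small case (a numpy sketch, not part of the lecture, with \tau_k the normalized trace):

```python
import numpy as np

rng = np.random.default_rng(2)

def tau(a):
    # the unique tracial state on a matrix algebra: the normalized trace
    return np.trace(a) / a.shape[0]

def phi(a):
    # the embedding N_k -> N_{k+1}, a |-> diag(a, a)
    return np.kron(np.eye(2), a)

k = 2
a = rng.standard_normal((2**k, 2**k)) + 1j * rng.standard_normal((2**k, 2**k))
trace_preserving = np.isclose(tau(phi(a)), tau(a))
```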

The hyperfinite II_1 factor is defined using the ingredients (M, \tau). The construction itself – called the GNS construction – is recurrent in the theory of operator algebras, so let us give it special attention.

3.1 The GNS representation

In this subsection, we let M denote a *-algebra satisfying some of the nice properties of the algebra M we just considered above. Thus, M does not have to be an increasing union of matrix algebras, it can also be a unital C*-algebra, or the increasing union (or direct limit) of unital C*-algebras. (I am not sure what is the optimal category of *-algebras for which the construction works.)

We also let \tau denote, not necessarily the tracial state treated above, but any state on M, by which we mean a positive (\tau(a^*a) \geq 0) and unital (\tau(1) = 1) linear functional on M. A state \tau : M \to \mathbb{C} is said to be faithful if \tau(a^*a) = 0 \Rightarrow a = 0.

Example: If M \subseteq B(H) is a *-subalgebra and x \in H is a unit vector, then w_{x,x} : a \mapsto \langle a x, x \rangle is a state. Such a state is called a vector state. The state w_{x,x} is faithful if and only if x is separating for M.

The GNS representation will show that essentially all states are vector states (of course, not every state is literally a vector state).

Theorem 9 (GNS representation): Given a pair (M, \tau), where M is a nice unital normed *-algebra as above and \tau is a state on M, there exists a Hilbert space H_\tau, a *-representation \pi_\tau : M \to B(H_\tau), and a unit vector x_\tau \in H_\tau such that 

(*) [\pi_\tau(M) x_\tau] = H_\tau   (“x_\tau is cyclic“)


(**) \tau(a) = \langle \pi_\tau(a) x_\tau, x_\tau \rangle    for all   a \in M.

The triple (H_\tau, \pi_\tau, x_\tau) is called the GNS representation of (M,\tau), and is the unique such triple satisfying (*) and (**). 

Proof: We begin by defining a semi-inner product on M:

\langle a, b \rangle_\tau := \tau(b^*a).

It is plain to see that \langle \cdot, \cdot \rangle_\tau is a sesqui-linear form, and it is positive because \tau is positive. We define H_\tau to be the Hausdorff completion of (M, \langle \cdot, \cdot \rangle_\tau), and we write \hat{a} for the image of a \in M in H_\tau. Put x_\tau = \hat{1}.

For every a \in M, define the linear map \pi_\tau(a) : (M, \langle \cdot, \cdot \rangle_\tau) \to (M, \langle \cdot, \cdot \rangle_\tau) by

\pi_\tau(a) \hat{b} = \widehat{ab}.

After showing that \pi_\tau(a) is bounded on (M,\langle \cdot, \cdot \rangle_\tau), one can extend it to a linear operator on H_\tau. Boundedness follows from

\|\pi_\tau(a)\hat{b}\|_\tau^2 = \tau(b^* a^* a b) \leq \|a\|^2\tau(b^*b) = \|a\|^2 \|\hat{b}\|_\tau^2,

which follows from b^* a^* a b \leq \|a\|^2 b^* b (because a^* a \leq \|a\|^2 1).

It is then routine to check that \pi_\tau is a *-representation, and we omit this. We cannot omit the gratifying step of verifying that it satisfies (**):

\langle \pi_\tau(a) x_\tau, x_\tau \rangle_\tau = \tau(1^* a 1) = \tau(a).

The uniqueness is left as an exercise.

Exercise J: Prove the uniqueness of the GNS representation (make sure you first explain what uniqueness means).
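In finite dimensions the GNS construction can be carried out completely concretely. The following numpy sketch (not part of the lecture) takes M = M_n(\mathbb{C}) with its tracial state; then H_\tau is M_n(\mathbb{C}) with the inner product \langle a, b \rangle_\tau = \tau(b^* a) (the trace is faithful, so no quotient is needed), \pi_\tau(a) is left multiplication by a, and x_\tau = \hat{1}. We check (**) and the boundedness estimate from the proof.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2

def tau(a):
    # tracial state on M_n(C)
    return np.trace(a) / n

def inner(a, b):
    # GNS inner product <a, b>_tau = tau(b^* a); tau is faithful, so this is
    # a genuine inner product and H_tau is M_n(C) itself
    return tau(b.conj().T @ a)

a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = np.eye(n)   # the cyclic vector x_tau = 1-hat

# (**): tau(a) = <pi_tau(a) x_tau, x_tau>, where pi_tau(a) b-hat = (ab)-hat
gns_identity = np.isclose(tau(a), inner(a @ x, x))

# boundedness estimate: ||pi_tau(a) b-hat||^2 <= ||a||^2 ||b-hat||^2
bounded = inner(a @ b, a @ b).real <= np.linalg.norm(a, 2)**2 * inner(b, b).real + 1e-9
```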

Example: Let M= L^\infty(\mu), where \mu is a probability measure. Let \varphi_\mu be the state

\varphi_\mu(f) = \int f d \mu.

Then H_{\varphi_\mu} is the completion of L^\infty(\mu) with respect to the inner product

\langle f, g \rangle_{\varphi_\mu} = \int f \overline{g} d \mu,

that is, H_{\varphi_\mu} = L^2(\mu). Also, x_{\varphi_\mu} = 1, and the GNS representation \pi_{\varphi_\mu} is the representation by multiplication operators, given by \pi_{\varphi_\mu}(f) = M_f.

The above example leads to the following notation: given a von Neumann algebra M with a state \tau, one writes L^2(M, \tau) for H_\tau, and \|a\|_2 for \|\hat{a}\|_\tau.

Example: Consider L(G), for a countable group G, with the state \tau(a) = \langle a \delta_e, \delta_e \rangle. Then the GNS representation is the identity representation.

3.2 The definition of R and its properties

Now we leave the general GNS construction and return to the particular choice of M = \cup N_k (where N_k = M_{2^k}(\mathbb{C})) with its state \tau = \lim \tau_k. Letting (H_\tau, \pi_\tau, x_\tau) be the GNS representation of (M,\tau) we now define R to be the von Neumann algebra generated by \pi_\tau:

R = \pi_\tau(M)''.

R is called the hyperfinite II_1 factor. Since \tau is faithful on M, the Hausdorff completion involves no quotient, just completion; moreover, \pi_\tau is faithful:

\|\pi_\tau(a)\|^2 \geq \|\pi_\tau(a) x_\tau\|^2 = \tau(a^*a) > 0,

if a \neq 0. By Theorem 1 in Lecture 3, the restriction of \pi_\tau to every N_k is isometric, so \pi_\tau is isometric. We can therefore push \tau forward to \pi_\tau(M):

\tau(\pi_\tau(a)) := \tau(a) = \langle \pi_\tau(a) x_\tau, x_\tau \rangle_\tau.

Being a vector state, \tau extends from \pi_\tau(M) to its WOT/SOT closure R. This state is, in fact, a trace: if x,y \in R, we invoke Kaplansky’s density theorem to find bounded nets a_\alpha \to x, b_\alpha \to y so

\tau(xy) = \lim_\alpha \langle a_\alpha b_\alpha x_\tau, x_\tau \rangle = \lim_\alpha \tau(a_\alpha b_\alpha) = \lim_\alpha \tau(b_\alpha a_\alpha) = \tau(yx).

\tau is faithful: if \tau(x^* x) = 0 then \|x x_\tau\|^2 = \tau(x^* x) = 0, so x x_\tau = 0. But then for all a \in M, using that \tau is a trace,

\|x \hat{a}\|^2 = \langle a^* x^* x a x_\tau, x_\tau \rangle = \tau(a^* x^* x a) = \tau(a a^* x^* x) = \langle a a^* x^* x x_\tau, x_\tau \rangle = 0.

Since \{\hat{a} : a \in M\} is dense in H_\tau, it follows that x = 0.

Now, we will show that R is a factor. It will follow that it is a II_1 factor, since it has a faithful (normal) trace.

Let p \in Z(R) = R \cap R'. Define \tau'(x) = \tau(px) for all x \in R. The functional \tau' is also a trace, because p is central. Therefore \tau' \big|_{N_k} is also a trace, and by uniqueness of the trace in finite factors (in particular, uniqueness of the trace on M_{2^k}(\mathbb{C})), it must hold that \tau' \big|_{N_k} = c_k \tau_k for some constant c_k. Since the inclusions N_k \subseteq N_{k+1} are trace preserving, the c_k are all equal to a single constant c. Since \pi_\tau(M) is WOT dense in R, \tau' = c \tau. But \tau(p) = \tau'(1) = c \tau(1) = c, so

0=  \tau(p(1-p)) = \tau'(1-p) = \tau(p) \tau(1-p).

We conclude that either \tau(p) = 0 or \tau(1-p) = 0. Since \tau is faithful, we have that either p = 0 or p=1.

This shows that Z(R) = \mathbb{C}1, so R is a factor.

Definition 10: A von Neumann algebra A is said to be hyperfinite (or AFD – approximately finite dimensional) if it contains a SOT dense increasing union of finite dimensional C*-algebras, that is A = \overline{\cup A_k}^{SOT}, where A_k are all finite dimensional C*-algebras.

The hyperfinite II_1 factor R is hyperfinite, by construction. By Exercise B in Lecture 4, L(S_\infty) is also hyperfinite (and also a II_1 factor). The reason that R is called THE hyperfinite II_1 factor is that, as it turns out, every hyperfinite II_1 factor is *-isomorphic to R. This is not trivial: it was proved in Murray and von Neumann's fourth joint paper on the subject. We don't have time to cover the proof. If you want a heavy project, this is a good choice.

Project D: Uniqueness of the hyperfinite II_1 factor (this project might be too big, and may spill over into the summer break. But if you are interested this can be a nice experience, we can discuss it, and see how much of it you can do).

4. Infinite tensor products

Here is another way to look at the hyperfinite II_1 factor. We have the identifications

M_{2^k} = M_2 \otimes \cdots \otimes M_2 \quad (k factors)

and

\tau_k = \tau_1 \otimes \cdots \otimes \tau_1.

The imbedding M_{2^k} \to M_{2^{k+1}} is given by

x \mapsto x \otimes 1.

Therefore, one thinks of R as the infinite tensor product R = \overline{\otimes_{i=1}^\infty} M_2, and \tau = \overline{\otimes_{i=1}^\infty} \tau_1.

Using different finite dimensional algebras and different states (not necessarily traces), one gets different kinds of von Neumann algebras with states. Replacing the trace \tau_1 : M_2 \to \mathbb{C} with the state

\phi \begin{pmatrix}a & b \\ c & d \end{pmatrix} = \frac{1}{1 + \lambda} a + \frac{\lambda}{1+\lambda} d,

for \lambda \in (0,1), Powers obtained an uncountable family of mutually non-isomorphic type III factors.
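The Powers state can be written as \phi(x) = \mathrm{Tr}(\rho x) with density matrix \rho = \mathrm{diag}(1/(1+\lambda), \lambda/(1+\lambda)). Here is a numpy sketch (not part of the lecture) checking that \phi is indeed a state, but, for \lambda \neq 1, not a trace:

```python
import numpy as np

lam = 0.5
# Powers state on M_2: phi(x) = Tr(rho x), rho = diag(1/(1+lam), lam/(1+lam))
rho = np.diag([1 / (1 + lam), lam / (1 + lam)])

def phi(x):
    return np.trace(rho @ x)

rng = np.random.default_rng(4)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

unital = np.isclose(phi(np.eye(2)), 1.0)
positive = phi(x.conj().T @ x).real >= 0

# For lam != 1, phi is NOT a trace: on the matrix units E_12, E_21 we get
# phi(E_12 E_21) = 1/(1+lam) while phi(E_21 E_12) = lam/(1+lam).
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
tracial = np.isclose(phi(E12 @ E12.T), phi(E12.T @ E12))
```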