### Advanced Analysis, Notes 2: Hilbert spaces (orthogonality, projection, orthonormal bases)

#### by Orr Shalit

*(Quick announcement: all lectures will from now on take place in room 201.)*

In the previous lecture, we learned the very basics of Hilbert space theory. In this lecture we shall go one little bit further, and prove the basic structure theorems for Hilbert spaces.

#### 0. Continuity of the inner product

**Exercise A:** Let $G$ be an inner product space. Prove that the inner product is continuous with respect to the norm: if $x_n \to x$ and $y_n \to y$ in $G$, then $(x_n, y_n) \to (x, y)$. Conclude in particular that if $x_n \to x$ then $\|x_n\| \to \|x\|$.

#### 1. Orthogonality

**Definition 1:** *Let $G$ be an inner product space.*

**(a)** Two vectors $x, y \in G$ are said to be **orthogonal**, denoted $x \perp y$, if $(x, y) = 0$.

**(b)** A set of non-zero vectors $\{x_i\}_{i \in I}$ is said to be an **orthogonal** set if $(x_i, x_j) = 0$ for all $i \neq j$.

**(c)** An orthogonal set $\{e_i\}_{i \in I}$ is said to be an **orthonormal** set if $\|e_i\| = 1$ for all $i \in I$. An orthonormal set is sometimes called an **orthonormal system.**

The following two easy propositions show how the geometry of inner product spaces has some close similarities with Euclidean geometry. These similarities invite mathematicians to use their geometric intuition when working in inner product spaces, and make these spaces especially lovable.

**Proposition 2 (Pythagorean identity):**

*In an inner product space, the following hold.*

**(a)** If $x \perp y$ then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.

**(b)** If $\{x_1, \ldots, x_n\}$ is a finite orthogonal set then

$$\Big\| \sum_{i=1}^{n} x_i \Big\|^2 = \sum_{i=1}^{n} \|x_i\|^2 .$$

**Proof:** **(a)** is a special case of **(b)**, which in turn follows from

$$\Big\| \sum_{i=1}^{n} x_i \Big\|^2 = \Big( \sum_{i=1}^{n} x_i , \sum_{j=1}^{n} x_j \Big) = \sum_{i,j=1}^{n} (x_i, x_j) = \sum_{i=1}^{n} \|x_i\|^2 .$$

**Example:** Let $G = C([0,1])$ with the usual inner product $(f, g) = \int_0^1 f(t) \overline{g(t)} \, dt$. The set $\{e_n\}_{n \in \mathbb{Z}}$, where $e_n(t) = e^{2 \pi i n t}$, is an orthonormal set:

$$(e_n, e_m) = \int_0^1 e^{2 \pi i (n-m) t} \, dt ,$$

and this is equal to $1$ if $n = m$ and to $0$ if $n \neq m$. The set $\{\cos(2 \pi n t)\}_{n=1}^\infty \cup \{\sin(2 \pi n t)\}_{n=1}^\infty$ is an orthogonal set, but not orthonormal. These two systems are also orthogonal sets in the larger space $L^2[0,1]$.
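As a quick numerical sanity check (a sketch only; the grid size and the midpoint-rule quadrature are arbitrary choices), one can approximate the $L^2[0,1]$ inner product and verify the orthonormality relations above:

```python
import numpy as np

# Midpoint-rule approximation of the L^2[0,1] inner product
# (f, g) = integral_0^1 f(t) * conj(g(t)) dt.
N = 20000
t = (np.arange(N) + 0.5) / N          # midpoints of a uniform partition

def inner(f, g):
    return np.sum(f(t) * np.conj(g(t))) / N

def e(n):                             # e_n(t) = exp(2*pi*i*n*t)
    return lambda s: np.exp(2j * np.pi * n * s)

print(abs(inner(e(3), e(3))))         # ~ 1  (n = m)
print(abs(inner(e(3), e(5))))         # ~ 0  (n != m)
```

The midpoint rule is exact here up to rounding, since sums of the exponentials over a full period of the uniform grid cancel exactly.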

The Pythagorean identity is a very special property of inner product spaces, and it gets used all the time. Another immediate identity that holds in inner product spaces is the following.

**Proposition 3 (parallelogram law):** *For any $x, y$ in an inner product space, the following holds:*

$$\|x + y\|^2 + \|x - y\|^2 = 2 \|x\|^2 + 2 \|y\|^2 .$$

This identity follows only from the fact that the norm in an inner product space is defined by a sesquilinear form. The parallelogram law differs from the Pythagorean identity in that it is stated only in terms of the norm, and makes no mention of the inner product. Thus, it can be used to show that there are norms not induced by an inner product. Though it does not get used as much, we will soon use the parallelogram law to prove some of the most fundamental theorems in Hilbert space theory (see Section 2).
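For instance (a small numeric sketch of the last remark), the Euclidean norm on $\mathbb{R}^2$ satisfies the parallelogram law while the sup-norm fails it at $x = (1,0)$, $y = (0,1)$, so the sup-norm is not induced by any inner product:

```python
import numpy as np

# Check ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2 for two norms on R^2.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

def check(norm):
    lhs = norm(x + y)**2 + norm(x - y)**2
    rhs = 2 * norm(x)**2 + 2 * norm(y)**2
    return lhs, rhs

print(check(np.linalg.norm))                # (4.0, 4.0) -- law holds
print(check(lambda v: np.max(np.abs(v))))   # (2.0, 4.0) -- law fails
```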

In the study of vector spaces of finite dimension, bases play an important role. Recall that $\{v_1, \ldots, v_n\}$ is a basis for a vector space $V$ if for all $x \in V$ there exist unique scalars $a_1, \ldots, a_n$ such that

$$x = \sum_{i=1}^{n} a_i v_i .$$

When dealing with finite dimensional inner product spaces, orthonormal bases are particularly handy, because if $\{e_1, \ldots, e_n\}$ is an *orthonormal* basis then the above representation is given by $a_i = (x, e_i)$. Furthermore one has $\|x\|^2 = \sum_{i=1}^{n} |(x, e_i)|^2$, and if $y = \sum_{i=1}^{n} b_i e_i$ then

$$(x, y) = \sum_{i=1}^{n} a_i \overline{b_i} .$$

The beautiful and useful fact is that *all of this remains true in any Hilbert space*. To explain what *all of this* means in infinite dimensional spaces requires a little care.

**Definition 4:** *Let $I$ be any set, and let $\{a_i\}_{i \in I}$ be a set of complex numbers. We say that the series $\sum_{i \in I} a_i$ converges to $a$, and we write $a = \sum_{i \in I} a_i$, if for all $\epsilon > 0$, there exists a finite set $F_0 \subseteq I$ such that for every finite set $F$ for which $F_0 \subseteq F \subseteq I$,*

$$\Big| a - \sum_{i \in F} a_i \Big| < \epsilon .$$

**Definition 4\*:** *Let $I$ be any set, and let $\{x_i\}_{i \in I}$ be a set of elements in an inner product space $G$. We say that the series $\sum_{i \in I} x_i$ converges to $x$, and we write $x = \sum_{i \in I} x_i$, if for all $\epsilon > 0$, there exists a finite set $F_0 \subseteq I$ such that for every finite set $F$ for which $F_0 \subseteq F \subseteq I$,*

$$\Big\| x - \sum_{i \in F} x_i \Big\| < \epsilon .$$

**Exercise B:** Prove that a series $\sum_{i \in I} a_i$ converges to $a$ if and only if there exists a countable set $\{i_1, i_2, \ldots\}$ contained in $I$ such that **(a)** $a_i = 0$ if $i \notin \{i_1, i_2, \ldots\}$; and **(b)** for any rearrangement of $\{i_1, i_2, \ldots\}$, the limit $\lim_{N \to \infty} \sum_{k=1}^{N} a_{i_k}$ exists in $\mathbb{C}$ and is equal to $a$.

**Exercise C:** Suppose that $a_i \geq 0$ for all $i \in I$. Prove that $\sum_{i \in I} a_i$ converges and is bounded by $M$ if and only if the set of all finite sums $\sum_{i \in F} a_i$ (with $F \subseteq I$ finite) is bounded by $M$.

**Proposition 5 (Bessel’s inequality):** *Let $\{e_i\}_{i \in I}$ be an orthonormal set in an inner product space $G$, and let $x \in G$. Then*

$$\sum_{i \in I} |(x, e_i)|^2 \leq \|x\|^2 .$$

**Proof:** Let $F \subseteq I$ be finite. A computation shows that $x - \sum_{i \in F} (x, e_i) e_i$ is orthogonal to $e_j$ for every $j \in F$. But then $x - \sum_{i \in F} (x, e_i) e_i \perp \sum_{i \in F} (x, e_i) e_i$, therefore, by the Pythagorean identity,

$$\|x\|^2 = \Big\| x - \sum_{i \in F} (x, e_i) e_i \Big\|^2 + \Big\| \sum_{i \in F} (x, e_i) e_i \Big\|^2 ,$$

thus (by Pythagoras once more) $\sum_{i \in F} |(x, e_i)|^2 \leq \|x\|^2$. This holds for all finite $F \subseteq I$, so (invoking Exercise C) the assertion follows.
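A minimal numerical illustration of Bessel’s inequality (a sketch only; the dimensions, the seed, and the use of a QR factorization to manufacture an orthonormal set are ad hoc choices):

```python
import numpy as np

# Bessel's inequality in R^6: for an orthonormal set {e_1, e_2, e_3}
# (columns of Q), sum_i |(x, e_i)|^2 <= ||x||^2, with a gap whenever x
# has a component outside span{e_i}.
rng = np.random.default_rng(0)
n, k = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal columns
x = rng.standard_normal(n)

coeffs = Q.T @ x                     # Fourier coefficients (x, e_i)
print(np.sum(np.abs(coeffs)**2))     # sum of |(x, e_i)|^2
print(np.linalg.norm(x)**2)          # ||x||^2 -- at least as large
```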

**Exercise D:** Deduce the Cauchy-Schwarz inequality from Bessel’s inequality. Did this involve circular reasoning?

#### 2. Orthogonal decomposition and orthogonal projection

**Definition 6:** *A subset $S$ in a vector space is said to be convex if for all $x, y \in S$ and all $t \in [0,1]$, the vector $t x + (1-t) y$ is also in $S$.*

**Lemma 7:** *Let $S$ be a closed convex set in a Hilbert space $H$. Then there is a unique $y \in S$ of minimal norm.*

**Proof:** Put $d = \inf \{ \|x\| : x \in S \}$. Let $\{y_n\}_{n=1}^\infty$ be a sequence in $S$ such that $\|y_n\| \to d$. Applying the parallelogram law to $y_n / 2$ and $y_m / 2$, we find

$$\Big\| \frac{y_n - y_m}{2} \Big\|^2 = \frac{\|y_n\|^2}{2} + \frac{\|y_m\|^2}{2} - \Big\| \frac{y_n + y_m}{2} \Big\|^2 .$$

Now, $\frac{y_n + y_m}{2} \in S$ by convexity, thus $\big\| \frac{y_n + y_m}{2} \big\|^2 \geq d^2$; letting $m, n \to \infty$, we find that the right hand side must tend to zero, hence $\{y_n\}$ is a Cauchy sequence. Since $H$ is complete and $S$ is closed, there is a $y \in S$ such that $y_n \to y$. By Exercise A, $\|y\| = \lim_n \|y_n\| = d$; this proves the existence of a norm minimizer. To prove the uniqueness, assume that $\|y\| = \|z\| = d$ with $y, z \in S$. If we form the sequence $y, z, y, z, \ldots$ then we have just seen above that this is a Cauchy sequence. It follows that $y = z$.

**Theorem 8:** *Let $S$ be a closed convex set in a Hilbert space $H$, and let $x \in H$. Then there exists a unique $y \in S$ such that*

$$\|x - y\| \leq \|x - s\|$$

*for all $s \in S$.*

**Proof:** Apply Lemma 7 to the closed convex set $S - x = \{ s - x : s \in S \}$. Any element of minimal norm in $S - x$ has the form $y - x$ for a unique $y \in S$ such that $\|x - y\|$ is minimal.

We call the element $y$ the **best approximation for $x$ within $S$.** We denote by $P_S$ the function that assigns to each $x \in H$ the best approximation for $x$ within $S$. Since every subspace is convex, we immediately obtain the following.
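For a concrete picture (a sketch; the point $x$, the choice of the closed unit ball as $S$, and the random sampling are all ad hoc), the best approximation of a point outside the closed unit ball of $\mathbb{R}^2$ is its radial projection, and no other point of the ball does better:

```python
import numpy as np

# Best approximation within the closed convex set S = {s : ||s|| <= 1}.
# For ||x|| > 1 the minimizer is x / ||x||; we compare it with random
# competitors sampled from S.
rng = np.random.default_rng(1)
x = np.array([3.0, 4.0])             # ||x|| = 5
p = x / np.linalg.norm(x)            # candidate best approximation P_S(x)

best = min(np.linalg.norm(x - s)
           for s in (rng.uniform(-1, 1, 2) for _ in range(10000))
           if np.linalg.norm(s) <= 1)
print(np.linalg.norm(x - p))         # 4.0
print(best)                          # >= 4.0 for every sampled s in S
```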

**Corollary 9:** *Let $M$ be a closed subspace of $H$, and let $x \in H$. Then there exists a unique $y \in M$ such that*

$$\|x - y\| \leq \|x - m\|$$

*for all $m \in M$.*

**Theorem 10:** *Let $S$ be a convex set in a Hilbert space $H$, and let $x \in H$ and $y \in S$. The following are equivalent.*

1. *$\|x - y\| \leq \|x - s\|$ for all $s \in S$ (in other words, $y$ is the best approximation for $x$ within $S$).*
2. *$\operatorname{Re}(x - y, s - y) \leq 0$ for all $s \in S$.*

**Proof:** Let $s \in S$ and $t \in (0,1]$. By convexity, $y + t(s - y) \in S$. Then expanding the inequality

$$\|x - y\|^2 \leq \| x - y - t(s - y) \|^2$$

(which holds for all $t \in (0,1]$), we get

$$0 \leq -2t \operatorname{Re}(x - y, s - y) + t^2 \|s - y\|^2 .$$

Dividing by $t$ and cancelling some terms, we obtain $\operatorname{Re}(x - y, s - y) \leq \frac{t}{2} \|s - y\|^2 \to 0$ as $t \to 0$. Thus 1 implies 2.

To get the implication 2 $\Rightarrow$ 1, we write

$$\|x - s\|^2 = \| (x - y) - (s - y) \|^2 = \|x - y\|^2 - 2 \operatorname{Re}(x - y, s - y) + \|s - y\|^2 \geq \|x - y\|^2 .$$

**Corollary 11:** *Let $M$ be a closed subspace in a Hilbert space $H$, and let $x \in H$ and $y \in M$. The following are equivalent.*

1. *$\|x - y\| \leq \|x - m\|$ for all $m \in M$ (in other words, $y$ is the best approximation for $x$ within $M$).*
2. *$(x - y) \perp m$ for all $m \in M$.*

**Definition 12:** *Let $G$ be an inner product space, and let $S \subseteq G$. We define $S^\perp = \{ x \in G : (x, s) = 0 \text{ for all } s \in S \}$.*

**Exercise E:** Prove that $S^\perp$ is always a closed subspace, and that $S \subseteq (S^\perp)^\perp$.

**Theorem 12:** *Let $M$ be a closed subspace in a Hilbert space $H$. Then for every $x \in H$ there is a unique $m \in M$ and a unique $n \in M^\perp$ such that $x = m + n$.*

**Remark:** The conclusion of the theorem is usually denoted shortly by $H = M \oplus M^\perp$.

**Proof:** Write $m = P_M x$ and $n = x - m$. Then $m \in M$ by definition. Moreover, we have $n \perp M$ by Corollary 11. Thus $n \in M^\perp$. For uniqueness, assume that $m + n = m' + n'$ with $m, m' \in M$ and $n, n' \in M^\perp$. Then we have

$$m - m' = n' - n \in M \cap M^\perp ,$$

and since $M \cap M^\perp = \{0\}$ we have $m = m'$ and $n = n'$.

**Theorem 13:** *Let $S$ be a closed convex subset in a Hilbert space $H$. The map $P_S$ satisfies $P_S \circ P_S = P_S$, and $P_S x = x$ if and only if $x \in S$. The map $P_S$ is linear if and only if $S$ is a subspace.*

**Exercise F:** Prove Theorem 13.

A mapping $P$ satisfying $P \circ P = P$ is called a **projection.** In the case that $M$ is a closed subspace, $P_M$ is called **the orthogonal projection onto $M$.**

**Theorem 14:** *Let $M$ be a subspace of a Hilbert space $H$. Then $M$ is closed if and only if $M = (M^\perp)^\perp$.*

**Proof:** Trivially, $M \subseteq (M^\perp)^\perp$ (“anything in $M$ is orthogonal to anything that is orthogonal to everything in $M$”) and we already noted that $(M^\perp)^\perp$ is closed (Exercise E); this gives one direction. So assume that $M$ is closed, let $x \in (M^\perp)^\perp$, and write the orthogonal decomposition of $x$ with respect to the decomposition $H = M \oplus M^\perp$, that is, $x = m + n$, with $m \in M$ and $n \in M^\perp$. But $M \subseteq (M^\perp)^\perp$, so this is also a decomposition of $x$ with respect to $H = (M^\perp)^\perp \oplus M^\perp$. However, $x$ already has a decomposition with respect to $(M^\perp)^\perp \oplus M^\perp$, namely $x = x + 0$. The uniqueness clause in Theorem 12 now implies that $x = m \in M$.

#### 3. Projections with respect to orthonormal bases I

We know from a course in linear algebra that every finite dimensional inner product space has an orthonormal basis (see also the appendix). Let $M$ be a finite dimensional subspace of a Hilbert space $H$.

**Exercise G: **Prove that a finite dimensional subspace of an inner product space is closed.

Let $\{e_1, \ldots, e_N\}$ be an orthonormal basis for $M$.

**Theorem 15:** *With the above notation, for all $x \in H$,*

$$P_M x = \sum_{n=1}^{N} (x, e_n) e_n .$$

**Proof:** Put $y = P_M x$. Since $y \in M$, we have $y = \sum_{n=1}^{N} (y, e_n) e_n$. (The familiar proof from the course in linear algebra goes as follows: write

$$y = \sum_{n=1}^{N} c_n e_n$$

for some constants $c_1, \ldots, c_N$, and taking the inner product of this equality with $e_m$ one obtains

$$(y, e_m) = c_m ,$$

and the representation of $y$ is $y = \sum_{n=1}^{N} (y, e_n) e_n$, as we claimed. By the way, this shows that **every orthonormal set is linearly independent**.) But by Corollary 11, $(x - y) \perp e_n$, or $(x, e_n) = (y, e_n)$, for all $n$, therefore $P_M x = y = \sum_{n=1}^{N} (x, e_n) e_n$, as asserted.

#### 4. Orthonormal bases

Recall that a **Hamel basis** for a vector space $V$ is a family $\{v_i\}_{i \in I}$ such that every $x \in V$ can be written in a unique way as a (finite) linear combination of the $v_i$’s. A vector space is said to be **infinite dimensional** if it has no finite Hamel basis. In linear algebra, a Hamel basis is called simply a “basis”, since every kind of basis considered in the finite dimensional case is a Hamel basis.

**Exercise H:** Prove that if $H$ is an infinite dimensional Hilbert space, then $H$ has no countable Hamel basis.

Speaking a little bluntly, the above exercise shows that Hamel bases are totally useless in infinite dimensional Hilbert spaces. We need another notion of basis.

**Definition 16:** *Let $\{e_i\}_{i \in I}$ be an orthonormal system in an inner product space $G$. $\{e_i\}_{i \in I}$ is said to be complete if $\{e_i\}_{i \in I}^\perp = \{0\}$.*

**Proposition 17:** *Every inner product space has a complete orthonormal system. *

**Proof: **One considers the set of all orthonormal systems in the space, ordered by inclusion, and applies Zorn’s Lemma to deduce the existence of a maximal orthonormal system. A maximal orthonormal system must be complete, otherwise one could add a normalized perpendicular vector.

In case that the inner product space in question is separable, one can also prove that there exists a complete orthonormal system by applying the Gram-Schmidt process (see the appendix) to a dense sequence.

**Definition 18:** *Let $\{e_i\}_{i \in I}$ be an orthonormal system in an inner product space $G$. For every $x \in G$, the scalars $\{(x, e_i)\}_{i \in I}$ are called the (generalized) Fourier coefficients of $x$ with respect to $\{e_i\}_{i \in I}$.*

By Bessel’s inequality (Proposition 5) and Exercise B, for every $x$, only countably many Fourier coefficients are non-zero. This fact frees us, in the following proposition, to consider only countable orthonormal systems.

**Proposition 19:** *Let $\{e_n\}_{n=1}^\infty$ be an orthonormal system in an inner product space $G$, and let $x \in G$. Then the following are equivalent:*

1. *$\|x\|^2 = \sum_{n=1}^{\infty} |(x, e_n)|^2$.*
2. *$x = \sum_{n=1}^{\infty} (x, e_n) e_n$.*
3. *For all $\epsilon > 0$, there exists an integer $N$ and scalars $c_1, \ldots, c_N$ such that $\big\| x - \sum_{n=1}^{N} c_n e_n \big\| < \epsilon$.*

**Remark:** The convergence of the series of vectors in 2 is to be interpreted simply as the assertion that $\lim_{N \to \infty} \big\| x - \sum_{n=1}^{N} (x, e_n) e_n \big\| = 0$. Note that the equivalence 1 $\Leftrightarrow$ 2 implies that this vector valued series converges regardless of the order in which it is summed, and also that this series converges in the sense of Definition 4\*.

**Proof:** As in the proof of Bessel’s inequality (Proposition 5) we find that

$$\Big\| x - \sum_{n=1}^{N} (x, e_n) e_n \Big\|^2 = \|x\|^2 - \sum_{n=1}^{N} |(x, e_n)|^2 ,$$

and this implies that 1 and 2 are equivalent. 2 obviously implies 3, because one simply takes $c_n = (x, e_n)$.

Assume that 3 holds. Let $\epsilon > 0$ be given. We need to find $N_0$ such that for all $N \geq N_0$, $\big\| x - \sum_{n=1}^{N} (x, e_n) e_n \big\| < \epsilon$. Let $N_0$ be the $N$ from 3 corresponding to $\epsilon$, and let $c_1, \ldots, c_{N_0}$ be the corresponding scalars. For any $N \geq N_0$, the linear combination $\sum_{n=1}^{N_0} c_n e_n$ is in the subspace spanned by $e_1, \ldots, e_N$, which we denote by $M_N$. But by Theorem 15, $\sum_{n=1}^{N} (x, e_n) e_n$ is the best approximation for $x$ within $M_N$, therefore

$$\Big\| x - \sum_{n=1}^{N} (x, e_n) e_n \Big\| \leq \Big\| x - \sum_{n=1}^{N_0} c_n e_n \Big\| < \epsilon .$$

**Proposition 20:** *Let $\{e_i\}_{i \in I}$ be an orthonormal system in a Hilbert space $H$ and let $\{a_i\}_{i \in I}$ be a set of complex numbers. The series $\sum_{i \in I} a_i e_i$ converges in $H$ if and only if $\sum_{i \in I} |a_i|^2 < \infty$.*

**Proof:** Suppose that $\sum_{i \in I} a_i e_i$ converges to some $x \in H$. By Exercise B, there is a countable subset of $I$, say $\{i_1, i_2, \ldots\}$, such that $a_i = 0$ if $i \notin \{i_1, i_2, \ldots\}$ and such that

$$x = \lim_{N \to \infty} \sum_{k=1}^{N} a_{i_k} e_{i_k} .$$

Taking the inner product of the above with $e_{i_k}$, we find that $(x, e_{i_k}) = a_{i_k}$ for all $k$. Proposition 19 now tells us that $\|x\|^2 = \sum_{k} |a_{i_k}|^2$, therefore $\sum_{i \in I} |a_i|^2 < \infty$.

Conversely, assume that $\sum_{i \in I} |a_i|^2 < \infty$. Let $\{i_1, i_2, \ldots\}$ be a countable index set as above, containing all $i$ for which $a_i \neq 0$. Define $x_N = \sum_{k=1}^{N} a_{i_k} e_{i_k}$. Then it is easy to see that $\{x_N\}$ is a Cauchy sequence (by the Pythagorean identity, $\|x_N - x_M\|^2 = \sum_{k=M+1}^{N} |a_{i_k}|^2$ for $N > M$). Let $x$ be the limit of this sequence. Then continuity of the inner product implies that $(x, e_{i_k}) = a_{i_k}$ for all $k$. So we have $x = \lim_{N \to \infty} \sum_{k=1}^{N} (x, e_{i_k}) e_{i_k}$. By the remark following Proposition 19,

$$x = \sum_{i \in I} a_i e_i$$

in the sense of Definition 4\*, and that completes the proof.

**Theorem 21:** *Let $\{e_i\}_{i \in I}$ be a complete orthonormal system in a Hilbert space $H$. Then for every $x \in H$ the following hold:*

1. $x = \sum_{i \in I} (x, e_i) e_i$.
2. $\|x\|^2 = \sum_{i \in I} |(x, e_i)|^2$.

**Remark:** There are two ways to interpret the (possibly uncountable) series in 1. One way is as in Definition 4\*. However, since we know that for any $x$ only countably many Fourier coefficients are nonzero, we can interpret this series as in Proposition 19. Both approaches turn out to be equivalent (but only when summing orthonormal sequences).

**Proof:** Fix $x \in H$. By Bessel’s inequality, $\sum_{i \in I} |(x, e_i)|^2 \leq \|x\|^2 < \infty$. By Proposition 20, the series $\sum_{i \in I} (x, e_i) e_i$ converges. Put $y = \sum_{i \in I} (x, e_i) e_i$. Our goal is to show that $x = y$. Since $\{e_i\}_{i \in I}$ is complete, it suffices to show that $(x - y, e_j) = 0$ for all $j \in I$. By continuity of the inner product,

$$(y, e_j) = \sum_{i \in I} (x, e_i) (e_i, e_j) = (x, e_j) ,$$

thus $(x - y, e_j) = 0$ for all $j \in I$. Thus $x - y \in \{e_i\}_{i \in I}^\perp = \{0\}$, so $x = y$. By Proposition 19, assertions 1 and 2 are equivalent, thus the proof is complete.
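As a concrete instance of assertion 2 (a sketch; the function $f(t) = t$ and the truncation point are arbitrary choices, and the Fourier coefficients are computed by hand beforehand): for $f(t) = t$ in $L^2[0,1]$ with the basis $e_n(t) = e^{2\pi i n t}$, one has $(f, e_0) = 1/2$ and $|(f, e_n)|^2 = 1/(4\pi^2 n^2)$ for $n \neq 0$, so Parseval predicts $\frac{1}{4} + 2\sum_{n \geq 1} \frac{1}{4\pi^2 n^2} = \int_0^1 t^2 \, dt = \frac{1}{3}$:

```python
import numpy as np

# Partial Parseval sum for f(t) = t, truncated at |n| = 200000;
# it should approach ||f||^2 = 1/3.
ns = np.arange(1, 200001)
parseval_sum = 0.25 + 2 * np.sum(1.0 / (4 * np.pi**2 * ns**2))
print(parseval_sum)   # ~ 0.33333...
print(1 / 3)
```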

**Question: **What role precisely does the completeness of the space play?

Because of the above theorem, a complete orthonormal system in a Hilbert space is often called an **orthonormal basis**. I think that this is highly justified terminology, and we will use it. An orthonormal system (not necessarily in a Hilbert space) satisfying the conclusions of Theorem 21 is sometimes said to be a **closed system.**

*Perhaps this is to avoid the usage of the word “basis”, since an orthonormal basis is definitely not a basis according to the definition given in linear algebra (see Exercise H).*

**Remark:** Assertion 2 in Theorem 21 is called **Parseval’s identity.**

**Corollary 22 (Generalized Parseval’s identity):** *Let $\{e_i\}_{i \in I}$ be an orthonormal basis for a Hilbert space $H$, and let $x, y \in H$. Then*

$$(x, y) = \sum_{i \in I} (x, e_i) \overline{(y, e_i)} .$$

**Proof:** One uses Parseval’s identity together with the **polarization identity**

$$(x, y) = \frac{1}{4} \sum_{k=0}^{3} i^k \| x + i^k y \|^2 ,$$

which holds in $H$ as well as in $\ell^2(I)$.
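The polarization identity (with the inner product linear in its first argument, as in these notes) is easy to check numerically; a minimal sketch in $\mathbb{C}^4$ with random vectors:

```python
import numpy as np

# (x, y) = (1/4) * sum_{k=0}^{3} i^k * ||x + i^k y||^2
rng = np.random.default_rng(3)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# numpy's vdot conjugates its FIRST argument, so (x, y) = vdot(y, x).
lhs = np.vdot(y, x)
rhs = sum((1j**k) * np.linalg.norm(x + (1j**k) * y)**2 for k in range(4)) / 4
print(abs(lhs - rhs))   # ~ 0
```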

#### 5. Projections with respect to orthonormal bases II

**Theorem 23:** *Let $M$ be a closed subspace in a Hilbert space $H$. Let $\{e_i\}_{i \in I}$ be an orthonormal basis for $M$. Then for every $x \in H$,*

$$P_M x = \sum_{i \in I} (x, e_i) e_i .$$

**Exercise I:** Prove Theorem 23.

#### 6. Dimension and isomorphism

**Theorem 24:** *Let $\{e_i\}_{i \in I}$ and $\{f_j\}_{j \in J}$ be two orthonormal bases for the same Hilbert space $H$. Then $I$ and $J$ have the same cardinality.*

**Proof:** If one of the index sets is finite then this result follows from linear algebra. So assume both sets are infinite. For every $i \in I$, let $J_i = \{ j \in J : (e_i, f_j) \neq 0 \}$. Every $j \in J$ belongs to at least one $J_i$, because $\{e_i\}_{i \in I}$ is complete (if $f_j \perp e_i$ for all $i$, then $f_j = 0$). Therefore $J = \bigcup_{i \in I} J_i$. But as we noted several times, each $J_i$ is countable. These two facts combine to show that the cardinality of $J$ is less than or equal to the cardinality of $I$. Reversing the roles of $I$ and $J$, we see that they must have equal cardinality.

**Definition 25:** *Let $H$ be a Hilbert space. The dimension of $H$ is defined to be the cardinality of any orthonormal basis for $H$.*

**Definition 26:** **(a)** *Let $G_1, G_2$ be inner product spaces. A unitary map (or simply a unitary) from $G_1$ to $G_2$ is a bijective linear map $U : G_1 \to G_2$ such that $(Ux, Uy) = (x, y)$ for all $x, y \in G_1$.* **(b)** *Two inner product spaces are said to be isomorphic if there exists a unitary between them.*

**Theorem 27:** *Two Hilbert spaces are isomorphic if and only if they have the same dimension. *

**Exercise J:** Prove Theorem 27.

**Exercise K:** Prove that a Hilbert space is separable if and only if its dimension is at most $\aleph_0$ (recall that a metric space is said to be **separable** if it contains a countable dense subset).

We have gone through some efforts to treat Hilbert spaces which are of arbitrary dimension. However, in mathematical practice, one rarely encounters (or wishes to encounter) a non-separable space. Nevertheless, rare things do happen. Occasionally it is useful to have an arbitrarily large Hilbert space for some universal construction to be carried out (for example, one needs this when proving that every C*-algebra is an algebra of operators on some Hilbert space). There are also some natural examples which arise in analysis.

**Exercise L:** Let $G$ be the linear span of all functions on $\mathbb{R}$ of the form $t \mapsto e^{i \lambda t}$, $\lambda \in \mathbb{R}$. On $G$, we define a form

$$(f, g) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} f(t) \overline{g(t)} \, dt .$$

Prove that $G$ is an inner product space. Let $H$ be the completion of $G$. Prove that $H$ is not separable. Find an orthonormal basis for $H$.

#### Appendix: Gram-Schmidt process

We now give a version of the Gram-Schmidt orthogonalization process appropriate for sequences of vectors. In the case where the sequence is finite, this is precisely the same procedure studied in a course in linear algebra.

**Theorem 28:** *Let $\{x_n\}_{n=1}^\infty$ be a sequence of vectors in an inner product space $G$. Then there exists an orthonormal sequence $\{e_n\}_{n=1}^\infty$ with the same linear span as $\{x_n\}_{n=1}^\infty$. If the sequence $\{x_n\}_{n=1}^\infty$ is linearly independent, then for all $N$,*

$$\operatorname{span} \{ e_1, \ldots, e_N \} = \operatorname{span} \{ x_1, \ldots, x_N \} .$$

**Proof:** From every sequence of vectors one can extract a linearly independent sequence with the same span, so it suffices to prove the second half of the theorem. Assume that $\{x_n\}_{n=1}^\infty$ is a linearly independent sequence. We prove the claim by induction on $N$. For $N = 1$ we put $e_1 = x_1 / \|x_1\|$. Assume that $N > 1$, and that we have constructed an orthonormal sequence $e_1, \ldots, e_{N-1}$ such that

$$\operatorname{span} \{ e_1, \ldots, e_{N-1} \} = \operatorname{span} \{ x_1, \ldots, x_{N-1} \} .$$

Let $M_{N-1}$ denote the subspace appearing in the above equality, and let $P_{N-1}$ be the orthogonal projection onto $M_{N-1}$. Then $x_N$ is not in $M_{N-1}$. Put $y_N = x_N - P_{N-1} x_N$. Then $y_N \neq 0$, and by Corollary 11, $y_N \perp M_{N-1}$. Let $e_N = y_N / \|y_N\|$. Then $e_1, \ldots, e_N$ is an orthonormal sequence, and $\operatorname{span} \{ e_1, \ldots, e_N \} \subseteq \operatorname{span} \{ x_1, \ldots, x_N \}$ by construction. But since $e_1, \ldots, e_N$ are linearly independent (as is any orthonormal set; see the proof of Theorem 15), and there are $N$ of them, we must have $\operatorname{span} \{ e_1, \ldots, e_N \} = \operatorname{span} \{ x_1, \ldots, x_N \}$. That completes the proof.
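The inductive step above translates almost line by line into code. A minimal sketch (the function name, tolerance, and test vectors are ad hoc; dependent vectors are simply skipped, which handles the extraction step in the first half of the theorem):

```python
import numpy as np

# Gram-Schmidt: at each step subtract the projection onto the span of
# the previously constructed e's, then normalize.
def gram_schmidt(vectors, tol=1e-12):
    es = []
    for x in vectors:
        y = x - sum(np.vdot(e, x) * e for e in es)  # y = x - P_{M_{N-1}} x
        norm = np.linalg.norm(y)
        if norm > tol:                # skip linearly dependent vectors
            es.append(y / norm)
    return es

rng = np.random.default_rng(4)
xs = [rng.standard_normal(4) for _ in range(3)]
es = gram_schmidt(xs)
E = np.array(es)                      # rows are e_1, e_2, e_3
print(np.round(E @ E.T, 10))          # ~ identity: the e's are orthonormal
```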
