Advanced Analysis, Notes 2: Hilbert spaces (orthogonality, projection, orthonormal bases)

by Orr Shalit

(Quick announcement: all lectures will from now on take place in room 201). 

In the previous lecture, we learned the very basics of Hilbert space theory. In this lecture we shall go one little bit further, and prove the basic structure theorems for Hilbert spaces.

0. Continuity of the inner product

Exercise A: Let G be an inner product space. Prove that the inner product is continuous with respect to the norm: if x_n \rightarrow x and y_n \rightarrow y in G, then (x_n,y_n) \rightarrow (x,y). Conclude in particular that if x_n \rightarrow x then \|x_n\| \rightarrow \|x\|.
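For readers who like to experiment, here is a small numerical illustration of Exercise A (a Python/NumPy sketch, not part of the exercise; the helper `inner` is my own naming, conjugate-linear in the second argument to match our convention):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

def inner(u, v):
    # inner product conjugate-linear in the second argument,
    # matching the convention (x, y) used in these notes
    return np.sum(u * np.conj(v))

# x_n = x + (perturbation of size ~1/n), similarly for y_n
for n in [1, 10, 100, 1000]:
    xn = x + rng.standard_normal(5) / n
    yn = y + rng.standard_normal(5) / n
    err = abs(inner(xn, yn) - inner(x, y))
    print(n, err)  # err shrinks as n grows
```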

1. Orthogonality

Definition 1: Let G be an inner product space.


(a) Two vectors x,y \in G are said to be orthogonal, denoted x \perp y, if (x,y) = 0.

(b) A set of non-zero vectors \{e_i\}_{i \in I} \subseteq G is said to be an orthogonal set if e_i \perp e_j for all i \neq j.

(c) An orthogonal set \{e_i\}_{i \in I} \subseteq G is said to be an orthonormal set if \|e_i\| = 1 for all i \in I. An orthonormal set is sometimes called an orthonormal system.

The following two easy propositions show how the geometry of inner product spaces has some close similarities with Euclidean geometry. These similarities invite mathematicians to use their geometric intuition when working in inner product spaces, and make these spaces especially lovable.

Proposition 2 (Pythagorean identity): In an inner product space, the following hold

(a) If x \perp y  then  \|x + y\|^2 = \|x\|^2 +\|y\|^2.

(b) If \{e_i\}_{i =1}^n is a finite orthogonal set then

\|\sum_i e_i \|^2 = \sum_i \|e_i\|^2 .

Proof: (a) is a special case of (b), which in turn follows from

\|\sum_i e_i\|^2 = (\sum_i e_i, \sum_j e_j) = \sum_{i,j=1}^n (e_i, e_j) = \sum_i \|e_i\|^2.

Example: Let G = C[0, 1] with the usual inner product. The set \{e^{2\pi inx}\}_{n=-\infty}^\infty  is an orthonormal set:

(e^{2\pi imx}, e^{2\pi inx}) = \int_0^1 e^{2\pi i(m-n)x} dx

and this is equal to 1 if m=n and to \frac{1}{2\pi i(m-n)} e^{2\pi i(m-n)x}\Big|_0^1 = 0 if m \neq n. The set \{1, \sin 2\pi n x, \cos 2\pi nx\}_{n=1}^\infty is an orthogonal set, but not orthonormal. These two systems are also orthogonal sets in the larger space H = L^2[0,1].
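One can check the orthonormality relations above numerically. The following Python/NumPy sketch (illustration only; the function names are mine) approximates the inner product on C[0,1] with a midpoint rule:

```python
import numpy as np

def inner_L2(f, g, num=20000):
    # midpoint-rule approximation of (f, g) = ∫_0^1 f(x) conj(g(x)) dx
    x = (np.arange(num) + 0.5) / num
    return np.mean(f(x) * np.conj(g(x)))

def e(n):
    # the exponential e^{2πinx} from the example
    return lambda x: np.exp(2j * np.pi * n * x)

print(abs(inner_L2(e(3), e(3))))   # ≈ 1  (m = n)
print(abs(inner_L2(e(3), e(5))))   # ≈ 0  (m ≠ n)
print(abs(inner_L2(e(-2), e(7))))  # ≈ 0
```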

The Pythagorean identity is a very special property of inner product spaces, and it gets used all the time. Another immediate identity that holds in inner product spaces is the following.

Proposition 3 (parallelogram law): For any x,y in an inner product space, the following holds:

\|x+y\|^2 + \|x-y\|^2 = 2 \|x\|^2 + 2\|y\|^2 .

This identity follows only from the fact that the norm in an inner product space is defined by a sesquilinear form. The parallelogram law differs from the Pythagorean identity in that it is stated only in terms of the norm, and makes no mention of the inner product. Thus, it can be used to show that there are norms not induced by an inner product. Though it does not get used as often as the Pythagorean identity, we will soon use the parallelogram law to prove some of the most fundamental theorems in Hilbert space theory (see Section 2).
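To see concretely how the parallelogram law detects norms that are not induced by an inner product, here is a small Python/NumPy sketch: the Euclidean norm satisfies the law, while the \ell^1 norm violates it.

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.5, -1.0, 3.0])

def check(norm):
    # returns both sides of the parallelogram law for the given norm
    lhs = norm(x + y) ** 2 + norm(x - y) ** 2
    rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2
    return lhs, rhs

# Euclidean norm: induced by an inner product, so the law holds
print(check(lambda v: np.linalg.norm(v, 2)))
# l^1 norm: the law fails, so this norm comes from no inner product
print(check(lambda v: np.linalg.norm(v, 1)))
```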

In the study of vector spaces of finite dimension, bases play an important role. Recall that v_1, \ldots, v_n is a basis for a vector space G if for all x \in G there exist unique scalars c_1, \ldots, c_n such that

x = \sum_{i=1}^n c_i v_i .

When dealing with finite dimensional inner product spaces, orthonormal bases are particularly handy, because if v_1, \ldots, v_n is an orthonormal basis then the above c_i is given by (x,v_i). Furthermore one has \|x\|^2 = \sum_{i=1}^n |(x,v_i)|^2, and if y = \sum_i d_i v_i then

(x,y) = \sum_{i=1}^n c_i \overline{d}_i .

The beautiful and useful fact is that all of this remains true in any Hilbert space. To explain what all of this means in infinite dimensional spaces requires a little care.

Definition 4: Let I be any set, and let \{a_i\}_{i\in I} be a set of complex numbers. We say that the series \sum_{i \in I} a_i converges to s \in \mathbb{C}, and we write \sum_{i \in I} a_i = s, if for all \epsilon >0, there exists a finite set F_0 \subseteq I such that for every finite set F \subseteq I for which F \supseteq F_0,

|\sum_{i \in F}a_i -s|< \epsilon .

Definition 4*: Let I be any set, and let \{x_i\}_{i\in I} be a set of elements in an inner product space G. We say that the series \sum_{i \in I} x_i converges to x \in G, and we write \sum_{i \in I}x_i = x, if for all \epsilon >0, there exists a finite set F_0 \subseteq I such that for every finite set F \subseteq I for which F \supseteq F_0,

\|\sum_{i \in F}x_i -x\|< \epsilon .

Exercise B: Prove that a series \sum_{i \in I} x_i converges to x if and only if there exists a countable set J = \{j_1, j_2, \ldots\} contained in I such that (a) x_i = 0 if i \in I \setminus J; and (b) for any rearrangement J' = \{j'_1, j'_2, \ldots\} of J, we have \lim_{N \rightarrow \infty} \sum_{n=1}^N x_{j'_n} = x in G.

Exercise C: Suppose that a_i \geq 0 for all i. Prove that \sum_{i \in I} a_i converges to a sum that is at most M if and only if the set of all finite sums \{\sum_{i \in F} a_i : F \subseteq I \textrm{ finite} \} is bounded by M.

Proposition 5 (Bessel’s inequality): Let \{e_i\}_{i \in I} be an orthonormal set in an inner product space G, and let x \in G. Then

\sum_{i \in I} |(x,e_i)|^2 \leq \|x\|^2 .

Proof: Let F \subseteq I be finite.  A computation shows that \sum_{i \in F} (x,e_i) e_i is orthogonal to x - \sum_{i\in F} (x,e_i) e_i.  But x = \sum_{i\in F} (x,e_i)e_i + (x - \sum_{i\in F} (x,e_i)e_i), therefore, by the Pythagorean identity,

\|x\|^2 = \|\sum_{i\in F} (x,e_i)e_i\|^2 + \|x - \sum_{i\in F} (x,e_i)e_i\|^2 ,

thus (by Pythagoras once more) \sum_{i\in F} |(x,e_i)|^2 \leq \|x\|^2. This holds for all F, so (invoking Exercise C) the assertion follows.
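A quick numerical illustration of Bessel's inequality (a Python/NumPy sketch; the orthonormal set is produced by a QR factorization, which is one convenient way to obtain orthonormal columns):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10) + 1j * rng.standard_normal(10)

# An orthonormal set that does not span C^10: the 4 orthonormal columns
# obtained from a QR factorization of a random 10x4 matrix
Q, _ = np.linalg.qr(rng.standard_normal((10, 4)) + 1j * rng.standard_normal((10, 4)))

coeffs = Q.conj().T @ x            # the coefficients (x, e_i), i = 1, ..., 4
bessel_sum = np.sum(np.abs(coeffs) ** 2)
print(bessel_sum, np.linalg.norm(x) ** 2)  # the first number is the smaller one
```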

Exercise D: Deduce the Cauchy-Schwarz inequality from Bessel’s inequality. Did this involve circular reasoning?

2. Orthogonal decomposition and orthogonal projection

Definition 6: A subset S in a vector space V is said to be convex if for all x, y \in S and all t \in [0,1], the vector tx + (1-t) y is also in S.

Lemma 7: Let S be a closed convex set in a Hilbert space H. Then there is a unique x \in S of minimal norm. 

Proof: Put d:= \inf\{\|y\| : y \in S\}. Let \{x_n\} be a sequence in S such that \|x_n\| \rightarrow d. Applying the parallelogram law to x_m/2 and x_n/2, we find

\|\frac{1}{2}(x_m - x_n)\|^2 = 2\|\frac{1}{2}x_n\|^2+2\|\frac{1}{2}x_m\|^2 - \|\frac{1}{2}(x_n + x_m)\|^2

Now, \|\frac{1}{2}(x_n + x_m)\|^2 \geq d^2, thus, letting m,n \rightarrow \infty, we find that the right hand side must tend to zero, hence \{x_n\} is a Cauchy sequence. Since H is complete and S closed, there is an x \in S such that x_n \rightarrow x. By Exercise A, \|x\| = d; this proves the existence of a norm minimizer. To prove uniqueness, assume that y \in S with \|y\| = d. If we form the sequence x, y, x, y, \ldots then, by the argument just given, this is a Cauchy sequence. It follows that x = y.

Theorem 8: Let S be a closed convex set in a Hilbert space H, and let x \in H. Then there exists a unique y \in S such that 

\|x-y\| \leq \|x - w\|

for all w \in S.

Proof: Apply Lemma 7 to the convex set S - x. Any element in S - x of minimal norm corresponds to a y \in S such that \|y-x\| is minimal. 
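For a concrete instance of the theorem, take S = [0,1]^2 inside \mathbb{R}^2: the nearest point of a box is obtained by clipping each coordinate. The following Python/NumPy sketch (illustration only) checks that the clipped point beats randomly sampled points of S:

```python
import numpy as np

rng = np.random.default_rng(2)

# S = [0,1]^2, a closed convex subset of the Hilbert space R^2.
# For a box, the nearest point is obtained by clipping each coordinate.
def proj_box(x):
    return np.clip(x, 0.0, 1.0)

x = np.array([2.0, -0.5])
y = proj_box(x)                     # the best approximation: (1.0, 0.0)
print(y)

# sanity check: y is at least as close to x as random points of S
for _ in range(1000):
    w = rng.uniform(0.0, 1.0, size=2)
    assert np.linalg.norm(x - y) <= np.linalg.norm(x - w) + 1e-12
```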

We call the element y \in S the best approximation for x within S. We denote by P_S the function that assigns to each x \in H the best approximation for x within S. Since every subspace is convex, we immediately obtain the following.

Corollary 9: Let M be a closed subspace of H, and let x \in H. Then there exists a unique y \in M such that 

\|x-y\| \leq \|x - w\|

for all w \in M.

Theorem 10: Let S be a closed convex set in a Hilbert space H, and let x \in H and y \in S. The following are equivalent. 

  1. y = P_S (x) (in other words, y is the best approximation for x within S). 
  2. Re(x - y, s-y) \leq 0 for all s \in S.

Proof: Let y = P_S(x). Then expanding the inequality

\|x - (ts + (1-t)y)\|^2 \geq \|x - y\|^2 ,

(which holds for all t \in (0,1)), we get

\|x-y\|^2 - 2Re(x-y,t(s-y)) + \|t(s-y)\|^2 \geq \|x-y\|^2 .

Dividing by t and cancelling some terms, we obtain 2 Re (x-y, s-y) \leq t \|s-y\|^2. Letting t \rightarrow 0^+ gives Re(x-y, s-y) \leq 0. Thus 1 implies 2.

To get the implication 2 \Rightarrow 1, we write

\|x-s\|^2 = \|(x-y) - (s-y)\|^2 = \|x-y\|^2 - 2 Re(x-y,s-y) + \|s-y\|^2 \geq \|x-y\|^2 .
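Here is a numerical check of the equivalence just proved, for the closed unit ball in \mathbb{R}^2 (a Python/NumPy sketch; in a real space the real part is just the inner product):

```python
import numpy as np

rng = np.random.default_rng(3)

# S = closed unit ball in R^2; for x outside the ball, P_S(x) = x / ||x||
x = np.array([3.0, 4.0])
y = x / np.linalg.norm(x)           # (0.6, 0.8)

# Theorem 10: y = P_S(x) iff Re(x - y, s - y) <= 0 for all s in S.
# Sample many points of S and record the worst (largest) value.
worst = max(np.dot(x - y, s - y)
            for s in rng.uniform(-1, 1, size=(2000, 2))
            if np.linalg.norm(s) <= 1)
print(worst)  # <= 0, as the theorem predicts
```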

Corollary 11: Let M be a closed subspace in a Hilbert space H, and let x \in H and y \in M. The following are equivalent. 

  1. y = P_M (x) (in other words, y is the best approximation for x within M). 
  2. x - y \perp m for all m \in M.

Definition 12: Let G be an inner product space, and let S \subseteq G. We define S^\perp := \{g \in G : (s,g) = 0 \textrm{ for all } s \in S\}.

Exercise E: Prove that S^\perp is always a closed subspace, and that S \cap S^\perp \subseteq \{0\}.

Theorem 12: Let M be a closed subspace in a Hilbert space H. Then for every x \in H there is a unique y \in M and a unique z \in M^\perp such that x = y + z.

Remark: The  conclusion of the theorem is usually denoted shortly by H = M \oplus M^\perp.

Proof: Write x = P_M x + (x - P_M x). Then y:= P_M x \in M by definition. Moreover, letting z:= x - P_M x, we have (z,m) = 0 by Corollary 11. Thus z \in M^\perp. For uniqueness, assume that x = y + z = y' + z'. Then we have

M \ni y - y' = z - z' \in M^\perp ,

and since M \cap M^\perp = \{0\} we have y = y' and z = z'.
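The decomposition H = M \oplus M^\perp can be seen concretely in \mathbb{R}^6, taking M to be the column space of a matrix (a Python/NumPy sketch; QR is used only to produce an orthonormal basis of M):

```python
import numpy as np

rng = np.random.default_rng(4)

# M = column space of a random 6x2 matrix; the columns of Q form an
# orthonormal basis for M
A = rng.standard_normal((6, 2))
Q, _ = np.linalg.qr(A)

x = rng.standard_normal(6)
y = Q @ (Q.T @ x)                   # component of x in M
z = x - y                           # component of x in M-perp

print(np.allclose(x, y + z))        # the decomposition x = y + z
print(abs(np.dot(y, z)))            # ≈ 0: y is orthogonal to z
print(np.linalg.norm(A.T @ z))      # ≈ 0: z is orthogonal to every column of A
```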

Theorem 13: Let S be a closed convex subset in a Hilbert space H. The map P_S satisfies P_S \circ P_S = P_S , and P_S (x) = x if and only if x \in S. The map P_S is linear if and only if S is a subspace. 

Exercise F: Prove Theorem 13.

A mapping T satisfying T \circ T = T is called a projection. In the case that S is a subspace, P_S is called the orthogonal projection onto S.

Theorem 14: Let M be a subspace of a Hilbert space H. Then M is closed if and only if M = M^{\perp \perp}.

Proof: Trivially, M \subseteq M^{\perp\perp} (“anything in M is orthogonal to anything that is orthogonal to everything in M“) and we already noted that M^{\perp\perp} is closed (Exercise E). So assume that x \in M^{\perp\perp}, and write the orthogonal decomposition of x with respect to the decomposition H = M \oplus M^\perp, that is x = y + z, with y \in M, z \in M^{\perp}. But y \in M^{\perp\perp}, so this is also a decomposition of x with respect to H = M^\perp \oplus M^{\perp\perp}. However, x already has a decomposition x = x + 0 with respect to H = M^\perp \oplus M^{\perp\perp}. The uniqueness clause in Theorem 12 now implies that x = y \in M.

3. Projections with respect to orthonormal bases I

We know from a course in linear algebra that every finite dimensional inner product space has an orthonormal basis (see also the appendix). Let M be a finite dimensional subspace of a Hilbert space H.

Exercise G: Prove that a finite dimensional subspace of an inner product space is closed.

Let \{e_n\}_{n=1}^N be an orthonormal basis for M.

Theorem 15: With the above notation, for all x \in H

P_M x = \sum_{n=1}^N (x,e_n) e_n.

Proof: Put y = P_M x. Since y \in M, we have y = \sum_{n=1}^N (y,e_n)e_n. (The familiar proof from the course in linear algebra goes as follows:

y = \sum_{n=1}^N c_n e_n

for some constants c_n, and taking the inner product of this equality with e_k one obtains

(y,e_k) = \sum_{n=1}^N c_n (e_n, e_k) = c_k ,

and the representation of y is as we claimed. By the way, this shows that every orthonormal set is linearly independent). But by Corollary 11, (x-y, e_n) = 0 ,  or (x, e_n) = (y,e_n),  for all n, therefore y = \sum_{n=1}^N (x,e_n) e_n ,  as asserted.

4. Orthonormal bases

Recall that a Hamel basis for a vector space V is a family \{v_i\}_{i \in I} such that every v \in V can be written in a unique way as a (finite) linear combination of the v_i's. A vector space is said to be infinite dimensional if it has no finite Hamel basis. In linear algebra, a Hamel basis is called simply a “basis”, since every kind of basis considered in the finite dimensional case is a Hamel basis.

Exercise H: Prove that if H is an infinite dimensional Hilbert space, then H has no countable Hamel basis.

Speaking a little bluntly, the above exercise shows that Hamel bases are totally useless in infinite dimensional Hilbert spaces. We need another notion of basis.

Definition 16: Let E = \{e_i\}_{i \in I} be an orthonormal system in an inner product space G. E is said to be complete if E^\perp = \{0\}.

Proposition 17: Every inner product space has a complete orthonormal system. 

Proof: One considers the set of all orthonormal systems in the space, ordered by inclusion, and applies Zorn’s Lemma to deduce the existence of a maximal orthonormal system. A maximal orthonormal system must be complete, otherwise one could add a normalized perpendicular vector.

In case that the inner product space in question is separable, one can also prove that there exists a complete orthonormal system by applying the Gram-Schmidt process (see the appendix) to a dense sequence.

Definition 18: Let \{e_i\}_{i \in I} be an orthonormal system in an inner product space G. For every x \in G, the scalars (x,e_i) are called the (generalized) Fourier coefficients of x with respect to \{e_i\}_{i \in I}.

By Bessel’s inequality (Proposition 5) and Exercise B, for every x, only countably many Fourier coefficients are non-zero. This fact frees us, in the following proposition, to consider only countable orthonormal systems.

Proposition 19: Let \{e_n\}_{n=1}^\infty be an orthonormal system in an inner product space G. Then the following are equivalent: 

  1. \sum_{n=1}^\infty |(x,e_n)|^2 = \|x\|^2
  2. \sum_{n=1}^\infty (x,e_n)e_n = x
  3. For all \epsilon > 0, there exists an integer N_0 and scalars a_1, \ldots, a_{N_0} such that \|x - \sum_{n=1}^{N_0} a_n e_n \| < \epsilon

Remark: The convergence of the series of vectors in 2 is to be interpreted simply as the assertion that \lim_{N \rightarrow \infty} \|\sum_{n=1}^N (x,e_n) e_n - x \| = 0. Note that the equivalence 1 \Leftrightarrow 2 implies that this vector valued series converges regardless of the order in which it is summed, and that it also converges in the sense of Definition 4*.

Proof: As in the proof of Bessel’s inequality (Proposition 5) we find that

\|x\|^2 = \sum_{n=1}^N |(x,e_n)|^2 + \|x - \sum_{n=1}^N (x,e_n)e_n\|^2 ,

and this implies that 1 and 2 are equivalent. 2 obviously implies 3, because one simply takes a_n = (x,e_n).

Assume that 3 holds. Let \epsilon > 0 be given. We need to find N_0 such that for all N \geq N_0, \|\sum_{n=1}^N (x,e_n)e_n - x \| < \epsilon. Let N_0 be the N_0 from 3 corresponding to \epsilon, and let a_1,\ldots, a_{N_0} be the corresponding scalars. For any N \geq N_0, the linear combination \sum_{n=1}^{N_0} a_n e_n is in the subspace spanned by e_1, \ldots, e_N, which we denote by M. But by Theorem 15, P_M x = \sum_{n=1}^N (x,e_n) e_n is the best approximation for x within M, therefore

\|\sum_{n=1}^N (x,e_n) e_n - x\| \leq \|\sum_{n=1}^{N_0} a_n e_n - x\| < \epsilon .

Proposition 20: Let \{e_i\}_{i \in I} be an orthonormal system in a Hilbert space H and let \{a_i\}_{i \in I} be a set of complex numbers. The series \sum_{i \in I}a_i e_i converges in H if and only if \sum_{i \in I}|a_i|^2 < \infty.

Proof: Suppose that \sum_{i \in I}a_i e_i converges to some x \in H. By Exercise B, there is a countable subset of I, say J = \{j_1, j_2, \ldots, \}, such that a_i = 0 if i \notin J and such that

\lim_{N \rightarrow \infty} \sum_{n=1}^N a_{j_n} e_{j_n} = x .

Taking the inner product of the above with e_i, we find that a_i = (x, e_i) for all i \in I. Proposition 19 now tells us that \sum_{n=1}^\infty |a_{j_n}|^2< \infty, therefore \sum_{i \in I}|a_i|^2 < \infty.

Conversely, assume that \sum_{i \in I}|a_i|^2 < \infty. Let J be a countable index set as above. Define x_N = \sum_{n=1}^N a_{j_n} e_{j_n}. Then it is easy to see that \{x_N\} is a Cauchy sequence. Let x be the limit of this sequence. Then continuity of the inner product implies that (x, e_i) = a_i for all i \in I. So we have x = \lim_{N\rightarrow \infty} \sum_{n=1}^N (x, e_{j_n}) e_{j_n}. By the remark following Proposition 19,

x = \sum_{j \in J} (x, e_{j}) e_{j} = \sum_{i \in I}a_i e_i ,

and that completes the proof.
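For a concrete feel for Proposition 20, take H = \ell^2 and e_n the standard basis (a Python/NumPy sketch, with truncated sums standing in for the infinite series):

```python
import numpy as np

# In H = l^2, take e_n the standard basis and a_n = 1/n (square-summable).
# The partial sums x_N = sum_{n<=N} a_n e_n converge in norm; their norms
# approach sqrt(sum 1/n^2) = pi/sqrt(6).
N = 100000
a = 1.0 / np.arange(1, N + 1)
norms = np.sqrt(np.cumsum(a ** 2))  # ||x_N|| for N = 1, 2, ...
print(norms[-1], np.pi / np.sqrt(6))

# By contrast, a_n = 1/sqrt(n) is not square-summable, and the norms
# grow without bound (like sqrt(log N))
b = 1.0 / np.sqrt(np.arange(1, N + 1))
print(np.sqrt(np.sum(b ** 2)))
```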

Theorem 21: Let \{e_i\}_{i \in I} be a complete orthonormal system in a Hilbert space H. Then for every x \in H the following hold:

  1. x = \sum_{i \in I} (x, e_i) e_i.
  2. \|x\|^2 = \sum_{i \in I} |(x, e_i)|^2.

Remark: There are two ways to interpret the (possibly uncountable) series in 1. One way is as in Definition 4*. However, since we know that for any x only countably many Fourier coefficients are nonzero, we can interpret this series as in Proposition 19. Both approaches turn out to be equivalent (but only for series of pairwise orthogonal vectors).

Proof: Fix x \in H. By Bessel’s inequality, \sum_{i \in I}|(x,e_i)|^2 < \infty. By Proposition 20, the series \sum_{i \in I} (x,e_i) e_i converges. Put y = x - \sum_{i \in I} (x,e_i) e_i. Our goal is to show that y = 0. Since \{e_i\}_{i \in I} is complete, it suffices to show that (y, e_k) = 0 for all k \in I. By continuity of the inner product,

(\sum_{i\in I}(x,e_i)e_i, e_k) = (x,e_k) (e_k, e_k) = (x, e_k),

thus (y, e_k) = (x,e_k) - (x, e_k) = 0 for all k. Thus y \perp \{e_i\}_{i \in I}, so y = 0. By Proposition 19, assertions 1 and 2 are equivalent, thus the proof is complete.
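As a finite dimensional illustration of Theorem 21, the normalized DFT vectors form a complete orthonormal system in \mathbb{C}^8, and both assertions can be verified numerically (a Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 8
# The normalized DFT vectors form a complete orthonormal system in C^8:
# column j of E is the vector (e^{2πi jk/N}/sqrt(N))_k
k = np.arange(N)
E = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
coeffs = E.conj().T @ x                                    # (x, e_i)

print(np.allclose(x, E @ coeffs))                          # assertion 1
print(np.allclose(np.sum(np.abs(coeffs) ** 2),
                  np.linalg.norm(x) ** 2))                 # assertion 2 (Parseval)
```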

Question: What role precisely does the completeness of the space play?

Because of the above theorem, a complete orthonormal system in a Hilbert space is often called an orthonormal basis. I think that this is highly justified terminology, and we will use it. An orthonormal system (not necessarily in a Hilbert space) satisfying the conclusions of Theorem 21 is sometimes said to be a closed system. Perhaps this is to avoid the usage of the word “basis”, since an orthonormal basis is definitely not a basis according to the definition given in linear algebra (see Exercise H).

Remark: Assertion 2 in Theorem 21 is called Parseval’s identity. 

Corollary 22 (Generalized Parseval’s identity): Let \{e_i\}_{i \in I} be an orthonormal basis for a Hilbert space H, and let x, y \in H. Then 

(x,y) = \sum_{i \in I} (x,e_i) \overline{(y,e_i)} .

Proof: One uses Parseval’s identity together with the polarization identity 

(x,y) = \frac{1}{4}\left( \|x+y\|^2 - \|x-y\|^2 + i \|x + iy\|^2 - i\|x-iy\|^2 \right) ,

which holds in H as well as in \mathbb{C}.
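The polarization identity itself is easy to test numerically (a Python/NumPy sketch; `inner` is conjugate-linear in the second argument, matching our convention):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def inner(u, v):
    # inner product conjugate-linear in the second slot
    return np.sum(u * np.conj(v))

n = np.linalg.norm
# polarization identity: recover (x, y) from four norms
polar = 0.25 * (n(x + y)**2 - n(x - y)**2
                + 1j * n(x + 1j * y)**2 - 1j * n(x - 1j * y)**2)
print(np.allclose(polar, inner(x, y)))
```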

5. Projections with respect to orthonormal bases II

Theorem 23: Let M be a closed subspace in a Hilbert space H. Let \{e_i\}_{i \in I} be an orthonormal basis for M. Then for every x \in H

P_M x = \sum_{i \in I}(x,e_i) e_i .

Exercise I: Prove Theorem 23.

6. Dimension and isomorphism

Theorem 24: Let \{e_i\}_{i \in I} and \{f_j \}_{j \in J} be two orthonormal bases for the same Hilbert space H. Then I and J have the same cardinality.

Proof: If one of the index sets is finite then this result follows from linear algebra. So assume both sets are infinite. For every i \in I, let A_i = \{ j \in J : (e_i, f_j) \neq 0\}. Every j \in J belongs to at least one A_i, because \{e_i\}_{i \in I} is complete. Therefore J = \cup_{i \in I} A_i. But, as we noted after Definition 18, |A_i| \leq \aleph_0 for every i. These two facts combine to show that the cardinality of J is less than or equal to the cardinality of I. Reversing the roles of I and J we see that they must have equal cardinality.

Definition 25: Let H be a Hilbert space. The dimension of H is defined to be the cardinality of any orthonormal basis for H.

Definition 26: (a) Let G_1,G_2 be inner product spaces. A unitary map (or simply a unitary) from G_1 to G_2 is a bijective linear map U : G_1 \rightarrow G_2 such that (Ug, Uh)_{G_2} = (g,h)_{G_1} for all g,h \in G_1. (b) Two inner product spaces G_1,G_2 are said to be isomorphic if there exists a unitary between them. 

Theorem 27: Two Hilbert spaces are isomorphic if and only if they have the same dimension. 

Exercise J: Prove Theorem 27.

Exercise K: Prove that a Hilbert space is separable if and only if its dimension is \aleph_0 (recall that a metric space is said to be separable if it contains a countable dense subset).

We have gone to some effort to treat Hilbert spaces of arbitrary dimension. However, in mathematical practice, one rarely encounters (or wishes to encounter) a non-separable space. Nevertheless, rare things do happen. Occasionally it is useful to have an arbitrarily large Hilbert space for some universal construction to be carried out (for example, one needs this when proving that every C*-algebra is an algebra of operators on some Hilbert space). There are also some natural examples which arise in analysis.

Exercise L: Let G be the linear span of all functions of the form \sum_{k=1}^n a_k e^{i \lambda_k t}. On G, we define a form

(f,g) = \lim_{T \rightarrow \infty} \frac{1}{2T}\int_{-T}^T f(t) \overline{g(t)} dt .

Prove that G is an inner product space. Let H be the completion of G. Prove that H is not separable. Find an orthonormal basis for H.
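As a hint toward the last part of Exercise L: for f = e^{i\lambda t} and g = e^{i\mu t} with \lambda \neq \mu, the averaged integral can be computed in closed form, and it tends to 0 as T \rightarrow \infty; so \{e^{i\lambda t}\}_{\lambda \in \mathbb{R}} is an uncountable orthonormal set in H. A small Python sketch (the closed form here is elementary calculus, not a full solution):

```python
import numpy as np

# (f,g) = lim_{T→∞} (1/2T) ∫_{-T}^{T} f(t) conj(g(t)) dt, evaluated for
# f = e^{iλt}, g = e^{iμt}.  For λ ≠ μ the average equals sin((λ-μ)T)/((λ-μ)T).
def mean_inner(lam, mu, T):
    if lam == mu:
        return 1.0
    d = lam - mu
    return np.sin(d * T) / (d * T)

for T in [10.0, 100.0, 1000.0]:
    print(T, mean_inner(np.sqrt(2), 1.0, T))  # → 0: e^{i√2 t} ⟂ e^{it}
print(mean_inner(np.pi, np.pi, 1000.0))       # = 1 for equal frequencies
```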

Appendix:  Gram-Schmidt process

We now give a version of the Gram-Schmidt orthogonalization process appropriate for sequences of vectors. In the case where the sequence is finite, this is precisely the same procedure studied in a course in linear algebra.

Theorem 28: Let v_1, v_2, \ldots be a sequence of vectors in an inner product space G. Then there exists an orthonormal sequence e_1, e_2, \ldots with the same linear span as v_1, v_2, \ldots. If the sequence v_1, v_2, \ldots is linearly independent, then for all n=1, 2, \ldots

\textrm{span}\{e_1, \ldots, e_n\} = \textrm{span}\{v_1, \ldots, v_n\}.

Proof: From every sequence of vectors one can extract a linearly independent sequence, so it suffices to prove the second half of the theorem. Assume that v_1, v_2, \ldots is a linearly independent sequence. We prove the claim by induction on n. For n=1 we put e_1 = v_1 / \|v_1\|. Assume that n > 1, and that we have constructed an orthonormal sequence e_1, \ldots, e_{n-1} such that

\textrm{span}\{e_1, \ldots, e_{n-1}\} = \textrm{span}\{v_1, \ldots, v_{n-1}\}.

Let M denote the subspace appearing in the above equality, and let P_M be the orthogonal projection onto M. Then v_n is not in M. Put u = v_n - P_M v_n. Then u \neq 0, and by Corollary 11, u \in M^\perp. Let e_n = u/\|u\|. Then e_1, \ldots, e_n is an orthonormal sequence, and e_n \in \textrm{span}\{v_1, \ldots, v_n\} by construction. Thus \textrm{span}\{e_1, \ldots, e_{n}\} \subseteq \textrm{span}\{v_1, \ldots, v_{n}\}. But since e_1, \ldots, e_n are linearly independent (as is any orthonormal set; see the proof of Theorem 15), and there are n of them, we must have \textrm{span}\{e_1, \ldots, e_n\} = \textrm{span}\{v_1, \ldots, v_n\}. That completes the proof.
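The proof translates directly into code. Here is a Python/NumPy sketch of the recipe from Theorem 28 (the classical version; in floating point arithmetic the "modified" variant is numerically preferable, but for a few vectors this is fine):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent sequence, following Theorem 28:
    subtract the projection onto the span of the previous e's, then normalize."""
    es = []
    for v in vectors:
        u = v - sum(np.dot(v, e) * e for e in es)  # u = v_n - P_M v_n
        es.append(u / np.linalg.norm(u))           # e_n = u / ||u||
    return es

rng = np.random.default_rng(8)
vs = [rng.standard_normal(4) for _ in range(3)]
es = gram_schmidt(vs)

# Gram matrix of the output: should be the 3x3 identity
G = np.array([[np.dot(a, b) for b in es] for a in es])
print(np.allclose(G, np.eye(3)))
```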
