Homework 10
1. Consider $\R^4$. Let $\mathbf{u} = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 5 \\ 6 \\ 7 \\ 8 \end{bmatrix}$. Find a basis for the orthogonal complement of $W = \operatorname{span}\set{\mathbf{u}, \mathbf{v}}$.
Let $A = \begin{bmatrix} | & | \\ \mathbf{u} & \mathbf{v} \\ | & | \end{bmatrix}$. Then, $W^\perp = \ker A^\top = \ker\begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \end{bmatrix}$.
$$
\operatorname{rref}\begin{bmatrix}
1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8
\end{bmatrix} = \begin{bmatrix}
1 & 0 & -1 & -2 \\
0 & 1 & 2 & 3
\end{bmatrix} \\[1em]
\therefore y = -2z-3w \\
x = z + 2w \\
z,w\in\R
$$
As such,
$$
W^\perp = \Set{
\begin{bmatrix}
z + 2w \\
-2z-3w \\
z \\
w
\end{bmatrix}
: z,w \in\R
} = \operatorname{span}\Set{
\begin{bmatrix}
1 \\ -2 \\ 1 \\ 0
\end{bmatrix},
\begin{bmatrix}
2 \\ -3 \\ 0 \\ 1
\end{bmatrix}
}.
$$
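As a quick sanity check (a small Python sketch, not part of the hand computation), each spanning vector of $W^\perp$ should be orthogonal to both $\mathbf{u}$ and $\mathbf{v}$:

```python
# Verify the claimed basis of W-perp is orthogonal to u and v.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = [1, 2, 3, 4]
v = [5, 6, 7, 8]
basis = [[1, -2, 1, 0], [2, -3, 0, 1]]

checks = [dot(w, s) for w in basis for s in (u, v)]
print(checks)  # → [0, 0, 0, 0]
```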
2. Find the orthogonal projection of the vector $(1, 1, 1)$ onto the subspace defined by the equations $\begin{cases}x + y + z = 0, \\ x - y - 2z = 0.\end{cases}$
Let $W$ be the subspace defined by the above equations.
$$
\operatorname{rref}\begin{bmatrix}
1 & 1 & 1 \\
1 & -1 & -2
\end{bmatrix} = \begin{bmatrix}
1 & 0 & -\frac{1}{2} \\[.5em]
0 & 1 & \frac{3}{2}
\end{bmatrix}
\\[1em]
\therefore y = -\frac{3}{2}z \\[.5em]
x = \frac{1}{2}z \\[.5em]
z\in\R \\[.5em]
\therefore W = \Set{
\begin{bmatrix}
\frac{1}{2}z \\[.5em]
-\frac{3}{2}z \\[.5em]
z
\end{bmatrix}: z\in\R
} = \operatorname{span}\Set{
\begin{bmatrix}
\frac{1}{2} \\[.5em]
-\frac{3}{2} \\[.5em]
1
\end{bmatrix}
}
$$
Let $\vector{x}=\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ and $\vector{v}=\begin{bmatrix}
\frac{1}{2} \\[.5em]
-\frac{3}{2} \\[.5em]
1
\end{bmatrix}$. Then, the orthogonal projection of $\vector{x}$ onto $W$ is given by
$$
\begin{align*}
\operatorname{proj}_W(\vector{x})
= \frac{\langle\vector{x},\vector{v}\rangle}{||\vector{v}||^2}\vector{v}
&= \frac{\left\langle\begin{bmatrix}
1 \\ 1 \\ 1
\end{bmatrix},\begin{bmatrix}
\frac{1}{2} \\[.5em]
-\frac{3}{2} \\[.5em]
1
\end{bmatrix}\right\rangle}
{\left|\left|\begin{bmatrix}
\frac{1}{2} \\[.5em]
-\frac{3}{2} \\[.5em]
1
\end{bmatrix}\right|\right|^2}
\begin{bmatrix}
\frac{1}{2} \\[.5em]
-\frac{3}{2} \\[.5em]
1
\end{bmatrix} \\
&= \frac{\frac{1}{2}-\frac{3}{2}+1}
{\frac{1}{4}+\frac{9}{4}+1}\begin{bmatrix}
\frac{1}{2} \\[.5em]
-\frac{3}{2} \\[.5em]
1
\end{bmatrix} \\
&= \vector{0}.
\end{align*}
$$
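The projection vanishes because $\langle\vector{x},\vector{v}\rangle = 0$, i.e. $\vector{x}$ is already orthogonal to $W$. A minimal exact-arithmetic check:

```python
from fractions import Fraction

x = [1, 1, 1]
v = [Fraction(1, 2), Fraction(-3, 2), Fraction(1)]  # spanning vector of W

inner = sum(a * b for a, b in zip(x, v))    # <x, v>
norm_sq = sum(a * a for a in v)             # ||v||^2
proj = [(inner / norm_sq) * a for a in v]
print(inner, proj)  # inner product is 0, so the projection is the zero vector
```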
3. Find an orthogonal basis of $\R^3$ with $\begin{bmatrix} 1 \\ 1 \\ 0\end{bmatrix}$ as one of the vectors. Hint: You can use the Gram–Schmidt process on a basis with $\begin{bmatrix} 1 \\ 1 \\ 0\end{bmatrix}$ as the first vector, e.g. $\Set{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}}$.
Let $\vector{w}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$, $\vector{w}_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, and $\vector{w}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$.
Let $\vector{v}_1 = \vector{w}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$. Then, using the Gram–Schmidt process, the orthogonal vectors $\vector{v}_2$ and $\vector{v}_3$ are given by the following.
$$
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\vector{v}_2 &= \vector{w}_2 - \operatorname{proj}_{\vector{v}_1}(\vector{w}_2) \\
&= \vector{w}_2
- \frac{<\vector{w}_2,\vector{v}_1>}{\norm{\vector{v}_1}^2}\vector{v}_1 \\
&= \begin{bmatrix}
0 \\ 1 \\ 0
\end{bmatrix} - \frac{<\begin{bmatrix}
0 \\ 1 \\ 0
\end{bmatrix}, \begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix}>}{\norm{\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix}}^2}\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix} \\
&= \begin{bmatrix}
0 \\ 1 \\ 0
\end{bmatrix} - \frac{0(1) + 1(1) + 0(0)}{1^2 + 1^2 + 0^2}
\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix} \\
&= \begin{bmatrix}
-\frac{1}{2} \\ \frac{1}{2} \\ 0
\end{bmatrix}
\\[3em]
\vector{v}_3 &= \vector{w}_3
- \operatorname{proj}_{\vector{v}_1}(\vector{w}_3)
- \operatorname{proj}_{\vector{v}_2}(\vector{w}_3) \\
&= \vector{w}_3
- \frac{<\vector{w}_3, \vector{v}_1>}{\norm{\vector{v}_1}^2}\vector{v}_1
- \frac{<\vector{w}_3, \vector{v}_2>}{\norm{\vector{v}_2}^2}\vector{v}_2 \\
&= \begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix} - \frac{<\begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix}, \begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix}>}{\norm{\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix}}^2}\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix} - \frac{<\begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix}, \begin{bmatrix}
-\frac{1}{2} \\ \frac{1}{2} \\ 0
\end{bmatrix}>}{\norm{\begin{bmatrix}
-\frac{1}{2} \\ \frac{1}{2} \\ 0
\end{bmatrix}}^2}\begin{bmatrix}
-\frac{1}{2} \\ \frac{1}{2} \\ 0
\end{bmatrix} \\
&= \begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix} - 0\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix} - 0\begin{bmatrix}
-\frac{1}{2} \\ \frac{1}{2} \\ 0
\end{bmatrix} \\
&= \begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix}
\end{align*}
$$
As such, an orthogonal basis of $\R^3$ is
$$
\Set{
\begin{bmatrix}
1 \\ 1 \\ 0
\end{bmatrix},
\begin{bmatrix}
-\frac{1}{2} \\ \frac{1}{2} \\ 0
\end{bmatrix},
\begin{bmatrix}
0 \\ 0 \\ 1
\end{bmatrix}
}.
$$
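To double-check, all pairwise inner products in the claimed orthogonal basis should vanish; a short sketch with exact rationals:

```python
from fractions import Fraction
from itertools import combinations

basis = [
    [1, 1, 0],
    [Fraction(-1, 2), Fraction(1, 2), 0],
    [0, 0, 1],
]
dots = [sum(a * b for a, b in zip(p, q)) for p, q in combinations(basis, 2)]
print(dots)  # all pairwise dot products are zero
```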
4.
(a) Find an orthonormal basis for the kernel of the following matrix. $A = \begin{bmatrix} 2 & 1 & 0 & -1 \\ 3 & 2 & -1 & -1 \end{bmatrix}.$
$$
\operatorname{rref}(A) = \begin{bmatrix}
1 & 0 & 1 & -1 \\
0 & 1 & -2 & 1
\end{bmatrix}
\\
\therefore y = 2z-w \\
x = -z+w \\
\ker(A) = \Set{
\begin{bmatrix}
-z+w \\
2z-w \\
z \\
w
\end{bmatrix}: z,w\in\R
} = \operatorname{span}\Set{
\begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix},
\begin{bmatrix}
1 \\ -1 \\ 0 \\ 1
\end{bmatrix}
}
$$
Let $\vector{u}_1 = \begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix}, \vector{u}_2 = \begin{bmatrix}
1 \\ -1 \\ 0 \\ 1
\end{bmatrix}$. Then, $\set{\vector{u}_1, \vector{u}_2}$ is a (linearly independent) basis of $\ker(A)$.
To produce an orthogonal basis, we apply the Gram–Schmidt process. Let $\vector{v}_1 = \vector{u}_1 = \begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix}$. Then,
$$
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\vector{v}_2 &= \vector{u}_2 - \operatorname{proj}_{\vector{v}_1}(\vector{u}_2) \\
&= \vector{u}_2
- \frac{<\vector{u}_2,\vector{v}_1>}{\norm{\vector{v}_1}^2}\vector{v}_1 \\
&= \begin{bmatrix}
1 \\ -1 \\ 0 \\ 1
\end{bmatrix} - \frac{<\begin{bmatrix}
1 \\ -1 \\ 0 \\ 1
\end{bmatrix}, \begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix}>}{\norm{\begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix}}^2}\begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix} \\
&= \begin{bmatrix}
1 \\ -1 \\ 0 \\ 1
\end{bmatrix} - \frac{1(-1)+(-1)2+0(1)+1(0)}{(-1)^2 + 2^2 + 1^2 + 0^2}\begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix} \\
&= \begin{bmatrix}
\frac{1}{2} \\[.2em] 0 \\[.2em] \frac{1}{2} \\[.2em] 1
\end{bmatrix}.
\end{align*}
$$
Hence, $\set{\vector{v}_1, \vector{v}_2}$ is an orthogonal basis of $\ker(A)$. Finally, an orthonormal basis of $\ker(A)$ is
$$
\Set{
\frac{1}{\sqrt{6}}\begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix},
\frac{1}{\sqrt{\frac{3}{2}}}\begin{bmatrix}
\frac{1}{2} \\[.5em] 0 \\[.5em] \frac{1}{2} \\[.5em] 1
\end{bmatrix}
} = \Set{
\begin{bmatrix}
-\frac{1}{\sqrt{6}} \\[.5em]
\sqrt{\frac{2}{3}} \\[.5em]
\frac{1}{\sqrt{6}} \\[.5em]
0
\end{bmatrix},
\begin{bmatrix}
\frac{1}{\sqrt{6}} \\[.5em]
0 \\[.5em]
\frac{1}{\sqrt{6}} \\[.5em]
\sqrt{\frac{2}{3}}
\end{bmatrix}
}.
$$
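A quick floating-point check that both vectors are unit length, mutually orthogonal, and lie in $\ker(A)$:

```python
import math

A = [[2, 1, 0, -1], [3, 2, -1, -1]]
q1 = [x / math.sqrt(6) for x in [-1, 2, 1, 0]]
q2 = [x / math.sqrt(3 / 2) for x in [0.5, 0, 0.5, 1]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

in_kernel = [dot(row, q) for row in A for q in (q1, q2)]  # all ~0
norms = [dot(q1, q1), dot(q2, q2)]                        # both ~1
ortho = dot(q1, q2)                                       # ~0
print(in_kernel, norms, ortho)
```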
(b) Find an orthonormal basis for $(\ker(A))^\perp$, the orthogonal complement of $\ker(A)$.
Note that $(\ker(A))^\perp = \operatorname{Im}(A^\top) = \operatorname{Im}\begin{bmatrix}
2 & 3 \\
1 & 2 \\
0 & -1 \\
-1 & -1
\end{bmatrix}$.
$$
\operatorname{rref}(A^\top) = \begin{bmatrix}
1 & 0 \\
0 & 1 \\
0 & 0 \\
0 & 0
\end{bmatrix}
\\
\therefore \operatorname{Im}(A^\top) = \operatorname{span}\Set{
\begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix},
\begin{bmatrix}
3 \\ 2 \\ -1 \\ -1
\end{bmatrix}
}
$$
Let $\vector{u}_1 = \begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix},
\vector{u}_2 = \begin{bmatrix}
3 \\ 2 \\ -1 \\ -1
\end{bmatrix}$. Then, $\set{\vector{u}_1, \vector{u}_2}$ is a basis of $\operatorname{Im}(A^\top)$, and we apply Gram–Schmidt to orthogonalize it.
Let $\vector{v}_1 = \vector{u}_1 = \begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix}$. Then,
$$
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\vector{v}_2 &= \vector{u}_2 - \operatorname{proj}_{\vector{v}_1}(\vector{u}_2) \\
&= \vector{u}_2
- \frac{<\vector{u}_2,\vector{v}_1>}{\norm{\vector{v}_1}^2}\vector{v}_1 \\
&= \begin{bmatrix}
3 \\ 2 \\ -1 \\ -1
\end{bmatrix}
- \frac{<\begin{bmatrix}
3 \\ 2 \\ -1 \\ -1
\end{bmatrix}, \begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix}>}{\norm{\begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix}}^2}\begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix} \\
&= \begin{bmatrix}
3 \\ 2 \\ -1 \\ -1
\end{bmatrix}
- \frac{3(2)+2(1)+(-1)(0)+(-1)(-1)}{2^2+1^2+0^2+(-1)^2}\begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix} \\
&= \begin{bmatrix}
0 \\ \frac{1}{2} \\ -1 \\ \frac{1}{2}
\end{bmatrix}.
\end{align*}
$$
Then, $\set{\vector{v}_1, \vector{v}_2}$ is an orthogonal basis of $(\ker(A))^\perp$. Finally, an orthonormal basis of $(\ker(A))^\perp$ is
$$
\Set{
\frac{1}{\sqrt{6}}\begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix},
\frac{1}{\sqrt{\frac{3}{2}}}\begin{bmatrix}
0 \\ \frac{1}{2} \\ -1 \\ \frac{1}{2}
\end{bmatrix}
} = \Set{
\begin{bmatrix}
\sqrt{\frac{2}{3}} \\ \frac{1}{\sqrt{6}} \\ 0 \\ -\frac{1}{\sqrt{6}}
\end{bmatrix},
\begin{bmatrix}
0 \\ \frac{1}{\sqrt{6}} \\ -\sqrt{\frac{2}{3}} \\ \frac{1}{\sqrt{6}}
\end{bmatrix}
}.
$$
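The single Gram–Schmidt step above can be reproduced exactly with rationals, where `c` is the projection coefficient $\langle \vector{u}_2, \vector{v}_1\rangle / ||\vector{v}_1||^2$:

```python
from fractions import Fraction

u1 = [2, 1, 0, -1]
u2 = [3, 2, -1, -1]

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
c = Fraction(dot(u2, u1), dot(u1, u1))     # 9/6 = 3/2
v2 = [a - c * b for a, b in zip(u2, u1)]   # u2 minus its projection onto u1
print(c, v2)  # → 3/2 and the vector (0, 1/2, -1, 1/2)
```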
(c) Does the orthonormal basis in (a) combined with the orthonormal basis in (b) form an orthonormal basis for $\R^4$? Explain.
The union of the orthonormal bases for $\ker(A)$ and $(\ker(A))^\perp$ found in parts (a) and (b) is
$$
\Set{
\frac{1}{\sqrt{6}}\begin{bmatrix}
-1 \\ 2 \\ 1 \\ 0
\end{bmatrix},
\frac{1}{\sqrt{\frac{3}{2}}}\begin{bmatrix}
\frac{1}{2} \\[.5em] 0 \\[.5em] \frac{1}{2} \\[.5em] 1
\end{bmatrix},
\frac{1}{\sqrt{6}}\begin{bmatrix}
2 \\ 1 \\ 0 \\ -1
\end{bmatrix},
\frac{1}{\sqrt{\frac{3}{2}}}\begin{bmatrix}
0 \\ \frac{1}{2} \\ -1 \\ \frac{1}{2}
\end{bmatrix}
}.
$$
Placing them as column vectors in a matrix:
$$
\operatorname{rref}\begin{bmatrix}
-\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \sqrt{\frac{2}{3}} & 0 \\
\sqrt{\frac{2}{3}} & 0 & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \\
\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & 0 & -\sqrt{\frac{2}{3}} \\
0 & \sqrt{\frac{2}{3}} & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}
\end{bmatrix} = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
The reduced row-echelon form has full rank, so the four vectors span $\R^4$. It remains to check that they are mutually orthogonal (each is already a unit vector by construction). Checking all pairs in the set, we find that they are indeed orthogonal. As such, the above set is an orthonormal basis of $\R^4$.
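Equivalently, the Gram matrix of the four vectors should be the identity; a quick numerical sketch:

```python
import math

s6 = math.sqrt(6)
s32 = math.sqrt(3 / 2)
Q = [
    [x / s6 for x in [-1, 2, 1, 0]],
    [x / s32 for x in [0.5, 0, 0.5, 1]],
    [x / s6 for x in [2, 1, 0, -1]],
    [x / s32 for x in [0, 0.5, -1, 0.5]],
]
# Gram matrix: all pairwise inner products.
gram = [[sum(a * b for a, b in zip(p, q)) for q in Q] for p in Q]
# gram should be (numerically) the 4x4 identity matrix
```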
5. Consider the following subspace of $\R^4$: $V = \operatorname{span}\Set{ \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}}$
(i) What is the dimension of $V$?
$$
\operatorname{rref}\begin{bmatrix}
1 & 1 & 0 \\
1 & 0 & 2 \\
1 & 0 & 1 \\
1 & 1 & -1
\end{bmatrix} = \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}
$$
The matrix whose columns are the three vectors has full column rank. As such, they form a basis of $V$, and thus $\dim V = 3$.
(ii) Using the Gram–Schmidt process, find an orthogonal basis for $V$.
Let $\vector{u}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$, $\vector{u}_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}$, and $\vector{u}_3 = \begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}$. As shown in (i), $\set{\vector{u}_1, \vector{u}_2, \vector{u}_3}$ is a basis of $V$.
Now let $\vector{v}_1 = \vector{u}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$. Then, applying Gram–Schmidt:
$$
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\vector{v}_2 &= \vector{u}_2 - \operatorname{proj}_{\vector{v}_1}(\vector{u}_2) \\
&= \vector{u}_2
- \frac{<\vector{u}_2,\vector{v}_1>}{\norm{\vector{v}_1}^2}\vector{v}_1 \\
&= \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}
- \frac{<\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix},\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}>}{\norm{\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}^2}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \\
&= \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}
- \frac{1(1)+0(1)+0(1)+1(1)}{1^2+1^2+1^2+1^2}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \\
&= \begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}
\\[3em]
\vector{v}_3 &= \vector{u}_3
- \operatorname{proj}_{\vector{v}_1}(\vector{u}_3)
- \operatorname{proj}_{\vector{v}_2}(\vector{u}_3) \\
&= \vector{u}_3
- \frac{<\vector{u}_3, \vector{v}_1>}{\norm{\vector{v}_1}^2}\vector{v}_1
- \frac{<\vector{u}_3, \vector{v}_2>}{\norm{\vector{v}_2}^2}\vector{v}_2 \\
&= \begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}
- \frac{<\begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}>}{\norm{\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}^2}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}
- \frac{<\begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}, \begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}>}{\norm{\begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}}^2}\begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix} \\
&= \begin{bmatrix} 0 \\ 2 \\ 1 \\ -1 \end{bmatrix}
- \frac{0(1)+2(1)+1(1)+(-1)(1)}{1^2+1^2+1^2+1^2}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}
- \frac{0(\frac{1}{2})+2(-\frac{1}{2})+1(-\frac{1}{2})+(-1)(\frac{1}{2})}{(\frac{1}{2})^2 + (-\frac{1}{2})^2 + (-\frac{1}{2})^2 + (\frac{1}{2})^2}\begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix} \\
&= \begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix}
\end{align*}
$$
And so, an orthogonal basis of $V$ is
$$
\Set{\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix},\begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix}
}.
$$
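The computation above follows the classical Gram–Schmidt recipe; a compact exact implementation over rationals (a sketch, with a hypothetical `gram_schmidt` helper) reproduces the same basis:

```python
from fractions import Fraction

def gram_schmidt(vectors):
    """Classical Gram-Schmidt over exact rationals."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    ortho = []
    for u in vectors:
        v = [Fraction(x) for x in u]
        for w in ortho:
            c = dot(v, w) / dot(w, w)              # projection coefficient
            v = [a - c * b for a, b in zip(v, w)]  # subtract the projection
        ortho.append(v)
    return ortho

basis = gram_schmidt([[1, 1, 1, 1], [1, 0, 0, 1], [0, 2, 1, -1]])
print(basis)
```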
(iii) Find the orthogonal projection of $\begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}$ onto $V$.
From (ii),
$$
\Set{\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix},\begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix}
}
$$
is an orthogonal basis of $V$. As such,
$$
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\operatorname{proj}_V \begin{bmatrix}
1 \\ 2 \\ 3 \\ 4
\end{bmatrix}
&= \frac{<\begin{bmatrix}
1 \\ 2 \\ 3 \\ 4
\end{bmatrix}, \begin{bmatrix}
1 \\ 1 \\ 1 \\ 1
\end{bmatrix}>}{\norm{\begin{bmatrix}
1 \\ 1 \\ 1 \\ 1
\end{bmatrix}}^2}\begin{bmatrix}
1 \\ 1 \\ 1 \\ 1
\end{bmatrix}
+ \frac{<\begin{bmatrix}
1 \\ 2 \\ 3 \\ 4
\end{bmatrix}, \begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}>}{\norm{\begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}}^2}\begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}
+ \frac{<\begin{bmatrix}
1 \\ 2 \\ 3 \\ 4
\end{bmatrix}, \begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix}>}{\norm{\begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix}}^2}\begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix} \\
&= \frac{1(1)+2(1)+3(1)+4(1)}{1^2+1^2+1^2+1^2}\begin{bmatrix}
1 \\ 1 \\ 1 \\ 1
\end{bmatrix}
+ 0 \begin{bmatrix}
\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] \frac{1}{2}
\end{bmatrix}
+ \frac{1(\frac{1}{2})+2(\frac{1}{2})+3(-\frac{1}{2})+4(-\frac{1}{2})}{(\frac{1}{2})^2+(\frac{1}{2})^2+(-\frac{1}{2})^2+(-\frac{1}{2})^2}\begin{bmatrix}
\frac{1}{2} \\[.3em] \frac{1}{2} \\[.3em] -\frac{1}{2} \\[.3em] -\frac{1}{2}
\end{bmatrix} \\
&= \frac{1}{2}\begin{bmatrix}
3 \\ 3 \\ 7 \\ 7
\end{bmatrix}.
\end{align*}
$$
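As a check, the residual (the vector minus its projection) must be orthogonal to every spanning vector of $V$:

```python
from fractions import Fraction

x = [1, 2, 3, 4]
proj = [Fraction(3, 2), Fraction(3, 2), Fraction(7, 2), Fraction(7, 2)]
residual = [a - b for a, b in zip(x, proj)]

spanning = [[1, 1, 1, 1], [1, 0, 0, 1], [0, 2, 1, -1]]
dots = [sum(r * s for r, s in zip(residual, v)) for v in spanning]
print(residual, dots)  # residual is (-1/2, 1/2, -1/2, 1/2); all dots are 0
```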
6.
(i) Find the least-squares solution of the following system. $\begin{bmatrix} 2 & 3 \\ 4 & -2 \\ 1 & 5 \\ 2 & 0 \end{bmatrix}\mathbf{x} = \begin{bmatrix} 2 \\ -1 \\ 1 \\ 3 \end{bmatrix}$
Let $A = \begin{bmatrix} 2 & 3 \\ 4 & -2 \\ 1 & 5 \\ 2 & 0 \end{bmatrix}$ and $\vector{b} = \begin{bmatrix} 2 \\ -1 \\ 1 \\ 3 \end{bmatrix}$. Then, $A^\top = \begin{bmatrix}
2 & 4 & 1 & 2 \\
3 & -2 & 5 & 0
\end{bmatrix}$.
Since $A^\top A = \begin{bmatrix}
2 & 4 & 1 & 2 \\
3 & -2 & 5 & 0
\end{bmatrix}\begin{bmatrix} 2 & 3 \\ 4 & -2 \\ 1 & 5 \\ 2 & 0 \end{bmatrix} = \begin{bmatrix}
25 & 3 \\ 3 & 38
\end{bmatrix}$ is invertible, the least-squares solution $\mathbf{\hat{x}}$ can be derived by applying $A^\top$ to both sides.
$$
A^\top A\mathbf{\hat{x}} = A^\top\vector{b} \\[1em]
\begin{align*}
\therefore \mathbf{\hat{x}} &= (A^\top A)^{-1} A^\top\vector{b} \\
&= \begin{bmatrix}
25 & 3 \\ 3 & 38
\end{bmatrix}^{-1}\begin{bmatrix}
2 & 4 & 1 & 2 \\
3 & -2 & 5 & 0
\end{bmatrix}
\begin{bmatrix} 2 \\ -1 \\ 1 \\ 3 \end{bmatrix} \\
&= \frac{1}{941}\begin{bmatrix}
227 \\ 304
\end{bmatrix}
\end{align*}
$$
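The solution can be verified exactly against the normal equations $A^\top A\mathbf{\hat{x}} = A^\top\vector{b}$ (a minimal sketch with exact rationals):

```python
from fractions import Fraction

A = [[2, 3], [4, -2], [1, 5], [2, 0]]
b = [2, -1, 1, 3]
xhat = [Fraction(227, 941), Fraction(304, 941)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

AT = [list(col) for col in zip(*A)]   # transpose of A
lhs = matvec(AT, matvec(A, xhat))     # A^T A x-hat
rhs = matvec(AT, b)                   # A^T b
print(lhs, rhs)  # both sides agree
```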
(ii) Find the orthogonal projection of $\vector{b}$ onto the image of $A$ using the least-squares solution.
From (i), $A = \begin{bmatrix} 2 & 3 \\ 4 & -2 \\ 1 & 5 \\ 2 & 0 \end{bmatrix}$ and $\mathbf{\hat{x}} = \displaystyle\frac{1}{941}\begin{bmatrix} 227 \\ 304 \end{bmatrix}$. Then,
$$
\operatorname{proj}_{\operatorname{Im}(A)}(\vector{b}) = A\mathbf{\hat{x}} = \begin{bmatrix}
2 & 3 \\ 4 & -2 \\ 1 & 5 \\ 2 & 0
\end{bmatrix}
\frac{1}{941} \begin{bmatrix}
227 \\ 304
\end{bmatrix}
= \frac{1}{941} \begin{bmatrix}
1366 \\ 300 \\ 1747 \\ 454
\end{bmatrix}.
$$
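Equivalently, the least-squares residual $\vector{b} - A\mathbf{\hat{x}}$ is orthogonal to both columns of $A$, which confirms $A\mathbf{\hat{x}}$ is indeed the orthogonal projection of $\vector{b}$ onto $\operatorname{Im}(A)$:

```python
from fractions import Fraction

A = [[2, 3], [4, -2], [1, 5], [2, 0]]
b = [2, -1, 1, 3]
bhat = [Fraction(n, 941) for n in [1366, 300, 1747, 454]]  # A x-hat from above

residual = [x - y for x, y in zip(b, bhat)]
dots = [sum(r * row[j] for r, row in zip(residual, A)) for j in range(2)]
print(dots)  # residual is orthogonal to both columns of A
```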
7. Find the least-squares fitting straight line $y = C + Dt$ given the following set of data.
$$
\begin{array}{|c|c|c|c|c|}
\hline
t_i & -2 & 0 & 1 & 3 \\
\hline
y_i & 0 & 1 & 2 & 5 \\
\hline
\end{array}
$$
Using the equation of a straight line, we have the following system of equations:
$$
\begin{cases}
0 &= C+D(-2) \\
1 &= C+D(0) \\
2 &= C+D(1) \\
5 &= C+D(3)
\end{cases} \iff
\begin{bmatrix}
0 \\ 1 \\ 2 \\ 5
\end{bmatrix} = \begin{bmatrix}
1 & -2 \\
1 & 0 \\
1 & 1 \\
1 & 3
\end{bmatrix}\begin{bmatrix}
C \\ D
\end{bmatrix}
$$
Let $\vector{b} = \begin{bmatrix}
0 \\ 1 \\ 2 \\ 5
\end{bmatrix}$, $A = \begin{bmatrix}
1 & -2 \\
1 & 0 \\
1 & 1 \\
1 & 3
\end{bmatrix}$, and $\vector{x} = \begin{bmatrix}
C \\ D
\end{bmatrix}$. Here, the system $A\vector{x} = \vector{b}$ is inconsistent, so instead we find a least-squares solution $\mathbf{\hat{x}}=\begin{bmatrix}
\hat{C} \\ \hat{D}
\end{bmatrix}$ by applying $A^\top$ to both sides, such that
$$
A^\top A\mathbf{\hat{x}} = A^\top\vector{b}.
$$
As such, we have:
$$
\begin{bmatrix}
1 & 1 & 1 & 1 \\
-2 & 0 & 1 & 3
\end{bmatrix}
\begin{bmatrix}
1 & -2 \\
1 & 0 \\
1 & 1 \\
1 & 3
\end{bmatrix}
\begin{bmatrix}
\hat{C} \\ \hat{D}
\end{bmatrix}
= \begin{bmatrix}
1 & 1 & 1 & 1 \\
-2 & 0 & 1 & 3
\end{bmatrix}
\begin{bmatrix}
0 \\ 1 \\ 2 \\ 5
\end{bmatrix}
\\
\begin{bmatrix}
4 & 2 \\
2 & 14
\end{bmatrix}
\begin{bmatrix}
\hat{C} \\ \hat{D}
\end{bmatrix}
= \begin{bmatrix}
8 \\ 17
\end{bmatrix}
\\
\therefore \begin{bmatrix}
\hat{C} \\ \hat{D}
\end{bmatrix} =
\begin{bmatrix}
4 & 2 \\
2 & 14
\end{bmatrix}^{-1}
\begin{bmatrix}
8 \\ 17
\end{bmatrix}
= \begin{bmatrix}
\frac{3}{2} \\[.3em] 1
\end{bmatrix}
$$
Therefore, our line of best fit is given by the equation $y=\frac{3}{2}+t$.
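For a least-squares line, the residuals must sum to zero and be uncorrelated with $t$ (these are exactly the two normal equations); a quick exact check of the fitted $\hat{C} = \frac{3}{2}$, $\hat{D} = 1$:

```python
from fractions import Fraction

ts = [-2, 0, 1, 3]
ys = [0, 1, 2, 5]
C, D = Fraction(3, 2), Fraction(1)

residuals = [y - (C + D * t) for t, y in zip(ts, ys)]
sum_r = sum(residuals)                               # first normal equation
sum_rt = sum(r * t for r, t in zip(residuals, ts))   # second normal equation
print(residuals, sum_r, sum_rt)
```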
8. Consider the non-standard inner product on $\R^2$: $\langle\mathbf{u},\mathbf{v}\rangle = \begin{bmatrix} u_1 & u_2 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 2 & 5\end{bmatrix} \begin{bmatrix} v_1 \\ v_2\end{bmatrix}$
(a) Verify this is an inner product on $\R^2$.
First, notice that this definition results in a $1\times1$ matrix, which we identify with a scalar. For $\vector{u},\vector{v}\in\R^2$,
$$
\begin{align*}
\langle\vector{u},\vector{v}\rangle &= \begin{bmatrix}
u_1 & u_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix} \\
&= \begin{bmatrix}v_1(u_1+2u_2) + v_2(2u_1+5u_2)\end{bmatrix}
\end{align*}
$$
Symmetry and bilinearity should be quiet obvious since we can just apply commutative, associative, and distributive properties of addition and multiplication here.
But for the sake of completeness, consider \vector{u},\vector{v},\vector{w}\in\R^2 and c\in\R.
Symmetry
\begin{align*}
\langle\vector{u},\vector{v}\rangle &=
\begin{bmatrix}
u_1 & u_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix} \\
&= \begin{bmatrix}v_1(u_1+2u_2) + v_2(2u_1+5u_2)\end{bmatrix}
\\
\langle\vector{v},\vector{u}\rangle &= \begin{bmatrix}
v_1 & v_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
u_1 \\ u_2
\end{bmatrix} \\
&= \begin{bmatrix}u_1(v_1+2v_2) + u_2(2v_1+5v_2)\end{bmatrix}
\end{align*}
And indeed,
v_1(u_1+2u_2) + v_2(2u_1+5u_2) =
u_1(v_1+2v_2) + u_2(2v_1+5v_2)
if you expand each term. Hence, \langle\vector{u},\vector{v}\rangle=\langle\vector{v},\vector{u}\rangle.
Bilinearity
\begin{align*}
\langle c\vector{u},\vector{v}\rangle
&= \begin{bmatrix}
cu_1 & cu_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}
&&= \begin{bmatrix}v_1(cu_1+2cu_2) + v_2(2cu_1+5cu_2)\end{bmatrix}
\\
&= c\begin{bmatrix}
u_1 & u_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix}
&&= \begin{bmatrix}cv_1(u_1+2u_2) + cv_2(2u_1+5u_2)\end{bmatrix}
\\
&= c \langle \vector{u},\vector{v}\rangle
&&= \begin{bmatrix}c(v_1(u_1+2u_2) + v_2(2u_1+5u_2))\end{bmatrix}
\end{align*}
Hence, \langle c\vector{u},\vector{v}\rangle = c\langle\vector{u},\vector{v}\rangle.
\begin{align*}
\langle\vector{u},\vector{v}\rangle &= \begin{bmatrix}
u_1 & u_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix} \\
&= \begin{bmatrix}v_1(u_1+2u_2) + v_2(2u_1+5u_2)\end{bmatrix}
\\
\langle\vector{w},\vector{v}\rangle &= \begin{bmatrix}
w_1 & w_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix} \\
&= \begin{bmatrix}v_1(w_1+2w_2) + v_2(2w_1+5w_2)\end{bmatrix}
\\
\langle \vector{u} + \vector{w}, \vector{v}\rangle
&= \begin{bmatrix}
u_1+w_1 & u_2+w_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
v_1 \\ v_2
\end{bmatrix} \\
&= \begin{bmatrix}v_1(u_1+w_1+2(u_2+w_2)) + v_2(2(u_1+w_1) + 5(u_2+w_2))\end{bmatrix}
\end{align*}
By inspection, combining the first term in \langle\vector{u},\vector{v}\rangle and \langle\vector{w},\vector{v}\rangle produces the first term in \langle\vector{u}+\vector{w},\vector{v}\rangle, and likewise for the second term.
\begin{align*}
\langle\vector{u},\vector{v}\rangle
+ \langle\vector{w},\vector{v}\rangle
&= &&\big[ v_1(u_1+2u_2) + v_2(2u_1+5u_2) + v_1(w_1+2w_2) + v_2(2w_1+5w_2) &\big] \\
&= &&\big[ v_1(u_1+2u_2) + v_1(w_1+2w_2) + v_2(2u_1+5u_2) + v_2(2w_1+5w_2) &\big] \\
&= &&\big[ v_1(u_1+w_1+2(u_2+w_2)) + v_2(2(u_1+w_1) + 5(u_2+w_2)) &\big] \\
&= &&\langle \vector{u} + \vector{w}, \vector{v}\rangle
\end{align*}
Hence, \langle\vector{u},\vector{v}\rangle + \langle\vector{w},\vector{v}\rangle = \langle \vector{u} + \vector{w}, \vector{v}\rangle. As such, this inner product satisfies bilinearity.
Positive-definiteness
\begin{align*}
\langle\vector{u},\vector{u}\rangle &= \begin{bmatrix}
u_1 & u_2
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
u_1 \\ u_2
\end{bmatrix} \\
&= \begin{bmatrix}u_1(u_1+2u_2) + u_2(2u_1+5u_2)\end{bmatrix} \\
&= \begin{bmatrix}u_1^2 + 5u_2^2 + 4u_1u_2\end{bmatrix}
\end{align*}
Notice that u_1^2 + 4u_1u_2 + 5u_2^2 = (u_1+2u_2)^2 + u_2^2, which is nonnegative for all real u_1, u_2 and equals zero if and only if u_1 + 2u_2 = 0 and u_2 = 0, i.e., u_1 = u_2 = 0. As such, \langle\vector{u},\vector{u}\rangle \ge 0 for all \vector{u}\in\R^2 and \langle\vector{u},\vector{u}\rangle = 0 \iff \vector{u}=\vector{0}.
Thus, \langle\cdot,\cdot\rangle satisfies all the properties of an inner product on \R^2.
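The three axioms verified above can also be spot-checked numerically on random rational vectors. A minimal sketch, assuming the 1\times1 output is read as a scalar (the helper name `inner` is mine); symmetry together with linearity in the first argument gives full bilinearity:

```python
from fractions import Fraction
import random

# Gram matrix of the inner product <u, v> = u^T M v
M = [[1, 2], [2, 5]]

def inner(u, v):
    # u^T M v expanded: sum over i, j of u_i * M[i][j] * v_j
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

random.seed(0)
for _ in range(100):
    u = [Fraction(random.randint(-9, 9)) for _ in range(2)]
    v = [Fraction(random.randint(-9, 9)) for _ in range(2)]
    w = [Fraction(random.randint(-9, 9)) for _ in range(2)]
    c = Fraction(random.randint(-9, 9))
    assert inner(u, v) == inner(v, u)                                     # symmetry
    assert inner([c * u[0], c * u[1]], v) == c * inner(u, v)              # homogeneity
    assert inner([u[0] + w[0], u[1] + w[1]], v) == inner(u, v) + inner(w, v)  # additivity
    assert inner(u, u) >= 0                                               # <u, u> >= 0
    if u != [0, 0]:
        assert inner(u, u) > 0  # definiteness: <u, u> = (u1 + 2u2)^2 + u2^2 > 0

print("all checks passed")
```

Random testing of course only samples the axioms; the algebraic proof above is what establishes them for all vectors.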
(b) Using the Gram-Schmidt process, starting with the vectors \begin{bmatrix} 1 \\ 0\end{bmatrix} and \begin{bmatrix} 0 \\ 1\end{bmatrix}, find an orthogonal basis under this inner product.
For this part, we will interpret the output of the inner product (the 1\times1 matrix) as a scalar; otherwise, we could not form the projection coefficients, since division is not generally defined for matrices.
Let \vector{v}_1 = \vector{e}_1 = \begin{bmatrix} 1 \\ 0\end{bmatrix} and \vector{e}_2 = \begin{bmatrix} 0 \\ 1\end{bmatrix}. Then,
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\vector{v}_2 &= \vector{e}_2 - \operatorname{proj}_{\vector{v}_1}(\vector{e}_2) \\
&= \vector{e}_2
- \frac{<\vector{e}_2,\vector{v}_1>}{||\vector{v}_1||^2}\vector{v}_1 \\
&= \begin{bmatrix} 0 \\ 1\end{bmatrix}
- \frac{<\begin{bmatrix} 0 \\ 1\end{bmatrix},\begin{bmatrix} 1 \\ 0\end{bmatrix}>}{\norm{\begin{bmatrix} 1 \\ 0\end{bmatrix}}^2}\begin{bmatrix} 1 \\ 0\end{bmatrix} \\
&= \begin{bmatrix} 0 \\ 1\end{bmatrix}
- \frac{
\begin{bmatrix}
0 & 1
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
1 \\ 0
\end{bmatrix}
}{
\begin{bmatrix}
1 & 0
\end{bmatrix}\begin{bmatrix}
1 & 2 \\ 2 & 5
\end{bmatrix} \begin{bmatrix}
1 \\ 0
\end{bmatrix}
}\begin{bmatrix} 1 \\ 0\end{bmatrix} \\
&= \begin{bmatrix} 0 \\ 1\end{bmatrix}
- \frac{2}{1}\begin{bmatrix} 1 \\ 0\end{bmatrix} \\
&= \begin{bmatrix}
-2 \\ 1
\end{bmatrix}
\end{align*}
And so, an orthogonal basis under this inner product is { [ 1 0 ] , [ − 2 1 ] } \Set{
\begin{bmatrix} 1 \\ 0 \end{bmatrix},
\begin{bmatrix} -2 \\ 1\end{bmatrix}
}.
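The single Gram-Schmidt step above translates directly into code. A sketch in exact rational arithmetic (the helper name `inner` is mine), confirming that the two basis vectors are orthogonal under this inner product:

```python
from fractions import Fraction

# Inner product <u, v> = u^T M v with the Gram matrix from problem 8
M = [[1, 2], [2, 5]]

def inner(u, v):
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

e1 = [Fraction(1), Fraction(0)]
e2 = [Fraction(0), Fraction(1)]

# One Gram-Schmidt step: v2 = e2 - (<e2, v1> / <v1, v1>) v1
v1 = e1
coef = inner(e2, v1) / inner(v1, v1)
v2 = [e2[0] - coef * v1[0], e2[1] - coef * v1[1]]

print(v2)             # [Fraction(-2, 1), Fraction(1, 1)]
print(inner(v1, v2))  # 0
```

Note that v1 and v2 are not orthogonal under the standard dot product (v1 · v2 = -2); orthogonality here is relative to the matrix M.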
9. Consider the space of all continuous functions on [0, 1], C[0, 1], with the standard inner product \langle f, g\rangle = \int_0^1 f(x)g(x)\d x.
(a) Using the Gram-Schmidt process, find an orthogonal basis for \operatorname{span}\Set{x^2, x, 1}.
Let U_1=x^2, U_2=x, and U_3=1.
Take V_1 = U_1 = x^2. Then,
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\href{https://www.wolframalpha.com/input?i=x+-+%5Cfrac%7B%5Cint_0%5E1+x%5Ccdot+x%5E2+dx%7D%7B%5Cint_0%5E1+x%5E2%5Ccdot+x%5E2+dx%7Dx%5E2+}
{V_2}
&= U_2 - \operatorname{proj}_{V_1}(U_2) \\
&= U_2 - \frac{<U_2,V_1>}{||V_1||^2}V_1 \\
&= x - \frac{<x,x^2>}{||x^2||^2}x^2 \\
&= x - \frac{\int_0^1 x\cdot x^2 \d x}{\int_0^1 x^2\cdot x^2 \d x}x^2 \\
&= x - \frac{5}{4}x^2
\\[2em]
\href{https://www.wolframalpha.com/input?i=1%5C%3A-%5C%3A%5Cfrac%7B%5Cint+_0%5E1%5C%3A1%5Ccdot+%5C%3Ax%5E2dx%7D%7B%5Cint+_0%5E1%5C%3Ax%5E2%5Ccdot+%5C%3Ax%5E2%5C%3Adx%7Dx%5E2%5C%3A-%5Cfrac%7B%5Cint+_0%5E1%5C%3A1%5Ccdot+%5Cleft%28x-%5Cfrac%7B5%7D%7B4%7Dx%5E2%5Cright%29dx%7D%7B%5Cint+_0%5E1%5C%3A%5Cleft%28x-%5Cfrac%7B5%7D%7B4%7Dx%5E2%5Cright%29%5Cleft%28x-%5Cfrac%7B5%7D%7B4%7Dx%5E2%5Cright%29%5C%3Adx%7D%5Cleft%28x-%5Cfrac%7B5%7D%7B4%7Dx%5E2%5Cright%29}
{V_3}
&= U_3 - \operatorname{proj}_{V_1}(U_3) - \operatorname{proj}_{V_2}(U_3) \\
&= U_3 - \frac{<U_3,V_1>}{||V_1||^2}V_1 - \frac{<U_3,V_2>}{||V_2||^2}V_2 \\
&= 1 - \frac{<1,x^2>}{||x^2||^2}x^2 - \frac{<1,x-\frac{5}{4}x^2>}{||x-\frac{5}{4}x^2||^2}\left(x-\frac{5}{4}x^2\right) \\
&= 1 - \frac{\int_0^1 1\cdot x^2\d x}{\int_0^1 (x^2)^2 \d x}x^2 -
\frac{\int_0^1 1\cdot (x-\frac{5}{4}x^2)\d x}{\int_0^1 (x-\frac{5}{4}x^2)^2 \d x}\left(x-\frac{5}{4}x^2\right) \\
&= \frac{10x^2 - 12x}{3} + 1 \\
\end{align*}
And so, we have that \displaystyle\Set{
x^2, x - \frac{5}{4}x^2, \frac{10x^2 - 12x}{3} + 1
} is an orthogonal basis of \operatorname{span}\Set{x^2, x, 1} under this inner product.
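Since the product of two polynomials can be integrated over [0, 1] term by term, the orthogonality of the resulting basis can be verified exactly. A sketch (the coefficient-list representation and the helper names `poly_mul`, `inner` are mine):

```python
from fractions import Fraction

# A polynomial a0 + a1 x + a2 x^2 + ... is the coefficient list [a0, a1, a2, ...]
def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def inner(p, q):
    # <p, q> = integral_0^1 p(x) q(x) dx; each c x^k contributes c / (k + 1)
    return sum(c / (k + 1) for k, c in enumerate(poly_mul(p, q)))

V1 = [0, 0, 1]                # x^2
V2 = [0, 1, Fraction(-5, 4)]  # x - (5/4) x^2
V3 = [1, -4, Fraction(10, 3)] # (10x^2 - 12x)/3 + 1 = 1 - 4x + (10/3) x^2

print(inner(V1, V2), inner(V1, V3), inner(V2, V3))  # 0 0 0
```

All three pairwise inner products vanish exactly, confirming the Gram-Schmidt computation.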
(b) Let W\sub C[0,1] be the subspace for which \set{1,\sin(2\pi x),\sin(2\pi2x)} is an orthogonal basis of W, as shown in the previous homework. Then, the orthogonal projection of x^2 onto W is given by:
\def<{\left\langle} \def>{\right\rangle}
\def\norm#1{\left|\left|#1\right|\right|}
\begin{align*}
\operatorname{proj}_W(x^2)
&= \frac{<x^2, 1>}{<1,1>}1
+ \frac{<x^2, \sin(2\pi x)>}{<\sin(2\pi x), \sin(2\pi x)>}\sin(2\pi x)
+ \frac{<x^2, \sin(2\pi2x)>}{<\sin(2\pi2x),\sin(2\pi2x)>}\sin(2\pi2x) \\
&= \frac{\int_0^1 x^2\cdot 1\d x}{\int_0^1 1^2 \d x}1
+ \href{https://www.wolframalpha.com/input?i=%5Cfrac%7B%5Cint+_0%5E1%5C%3Ax%5E2%5Ccdot+%5Csin+%5Cleft%282%5Cpi+%5C%3Ax%5Cright%29%5C%3Adx%7D%7B%5Cint+_0%5E1%5C%3A%5Csin+%5E2%5Cleft%282%5Cpi+%5C%3Ax%5Cright%29%5C%3Adx%7D%5Csin+%5Cleft%282%5Cpi+%5C%3Ax%5Cright%29}
{\frac{\int_0^1 x^2\cdot \sin(2\pi x) \d x}{\int_0^1 \sin^2(2\pi x) \d x}\sin(2\pi x)}
+ \href{https://www.wolframalpha.com/input?i=%5Cfrac%7B%5Cint+_0%5E1%5C%3Ax%5E2%5Ccdot+%5Csin+%5Cleft%282%5Cpi+2x%5Cright%29%5C%3Adx%7D%7B%5Cint+_0%5E1%5C%3A%5Csin+%5E2%5Cleft%282%5Cpi+2x%5Cright%29dx%7D%5Csin+%5Cleft%282%5Cpi+2x%5Cright%29}
{\frac{\int_0^1 x^2\cdot\sin(2\pi2x) \d x}{\int_0^1 \sin^2(2\pi2x)\d x}\sin(2\pi2x)} \\
&= \frac{1}{3} - \frac{\sin(2\pi x)}{\pi} - \frac{\sin(4\pi x)}{2\pi}
\end{align*}
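The three projection coefficients can be approximated with composite Simpson's rule using only the standard library; a sketch (the helper names `simpson` and `coef` are mine):

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def coef(g):
    # Projection coefficient <x^2, g> / <g, g> under the L2 inner product on [0, 1]
    num = simpson(lambda x: x * x * g(x), 0.0, 1.0)
    den = simpson(lambda x: g(x) * g(x), 0.0, 1.0)
    return num / den

c0 = coef(lambda x: 1.0)
c1 = coef(lambda x: math.sin(2 * math.pi * x))
c2 = coef(lambda x: math.sin(4 * math.pi * x))

print(c0, c1, c2)  # approximately 1/3, -1/pi, -1/(2 pi)
```

The numerical values agree with the closed form \frac{1}{3} - \frac{\sin(2\pi x)}{\pi} - \frac{\sin(4\pi x)}{2\pi} obtained above.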