Week 9
Determinant
Recall, for a $2\times2$ matrix $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the determinant of $A$ is:

$$\det A = \det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$

And $A^{-1}$ exists if and only if $\det A\neq 0$ (because the scalar coefficient in the inverse formula is $\displaystyle\frac{1}{\det A}=\frac{1}{ad-bc}$).
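A quick numeric sanity check (my own addition, assuming numpy is available; the matrix is made up):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])  # made-up 2x2 example

# det(A) = ad - bc = 3*2 - 1*4 = 2
assert np.isclose(np.linalg.det(A), 3 * 2 - 1 * 4)

# det(A) != 0, so the inverse exists; its scalar coefficient is 1/det(A):
A_inv = (1 / (3 * 2 - 1 * 4)) * np.array([[ 2.0, -1.0],
                                          [-4.0,  3.0]])
assert np.allclose(A_inv, np.linalg.inv(A))
```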
And recall that to compute eigenvalues of a matrix $A$, we needed to find values of $\lambda$ such that $A\vector{v}=\lambda\vector{v}$ (where $\vector{v}\neq\vector{0}$).

$\lambda$ is an eigenvalue if there exists $\vector{v}\neq\vector{0}$ such that $A\vector{v} = \lambda\vector{v}$… which is also equivalent to each of the following:

- $(A-\lambda I)\vector{v}=\vector{0}$ for some $\vector{v}\neq\vector{0}$
- $\ker(A-\lambda I)\neq\set{\vector{0}}$
- $(A-\lambda I)$ is not invertible
- $\det(A-\lambda I)=0$
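Tying the two recaps together, here is a small check (a sketch of mine, assuming numpy): for a $2\times2$ matrix, $\det(A-\lambda I)=\lambda^2-\operatorname{tr}(A)\,\lambda+\det A$, and its roots should be exactly numpy's eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # made-up symmetric example

# Characteristic polynomial of a 2x2: lambda^2 - tr(A)*lambda + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
roots = np.sort(np.roots(coeffs))

# The roots of det(A - lambda*I) = 0 are the eigenvalues (here, 1 and 3).
assert np.allclose(roots, np.sort(np.linalg.eigvals(A)))
```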
So, the goal here is to find $\det A$ for an arbitrary $n\times n$ matrix $A$.
The determinant is a function, usually denoted $\det$,

$$\det: \underbrace{\R^n \times\cdots\times\R^n}_{n\text{ copies}} \to \R$$

satisfying the following properties:
Property 1: Multilinearity
$$\det \begin{pmatrix}
\vector{u}_1 + \vector{v}_1 \\
\vector{v}_2 \\
\vdots \\
\vector{v}_n
\end{pmatrix} = \det \begin{pmatrix}
\vector{u}_1 \\
\vector{v}_2 \\
\vdots \\
\vector{v}_n
\end{pmatrix} + \det \begin{pmatrix}
\vector{v}_1 \\
\vector{v}_2 \\
\vdots \\
\vector{v}_n
\end{pmatrix}$$

$$\det \begin{pmatrix}
\alpha\vector{v}_1 \\
\vector{v}_2 \\
\vdots \\
\vector{v}_n
\end{pmatrix} = \alpha\det \begin{pmatrix}
\vector{v}_1 \\
\vector{v}_2 \\
\vdots \\
\vector{v}_n
\end{pmatrix}$$
Property 2: Antisymmetry
Swapping any two rows will flip the sign of the determinant.
$$\det \begin{pmatrix}
\vector{v}_1 \\
\vdots \\
\vector{v}_2
\end{pmatrix} = -\det \begin{pmatrix}
\vector{v}_2 \\
\vdots \\
\vector{v}_1
\end{pmatrix}$$
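Neither property is visible in the $ad-bc$ formula at a glance, so here's a numeric spot check of both (my own sketch, assuming numpy; the rows are random):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((3, 3))   # rows v_1, v_2, v_3
u1 = rng.standard_normal(3)
alpha = 2.5
det = np.linalg.det

# Property 1: linear in the first row (additivity and scaling).
assert np.isclose(det(np.vstack([u1 + V[0], V[1:]])),
                  det(np.vstack([u1, V[1:]])) + det(V))
assert np.isclose(det(np.vstack([alpha * V[0], V[1:]])), alpha * det(V))

# Property 2: swapping rows 1 and 3 flips the sign.
assert np.isclose(det(V[[2, 1, 0]]), -det(V))
```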
Property 3: Normalization (it doesn't have a flashy name, just remember it)
The determinant of the identity matrix is one.
$$\det \begin{pmatrix}
\vector{e}_1 \\
\vdots \\
\vector{e}_n
\end{pmatrix} = \det I_n = 1$$
Notice that by properties 1 and 2 together, $\det$ is a function that is linear in all rows: antisymmetry lets us swap any row into the first position, apply linearity there, and swap back.
Multilinearity
Take the unit cube, spanned by the three standard basis vectors $\vector{e}_1, \vector{e}_2, \vector{e}_3$.
We know that the volume of the unit cube is just $\text{length}\times\text{width}\times\text{height}$. Or:

$$1 \times 1 \times 1$$
In other words, the volume $V$ is:

$$V = \det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} = 1.$$
Then, if one of the sides is of length $2$ (e.g., take $2\vector{e}_1$), the volume is now

$$2\times1\times1 = 2.$$
Or:
$$V = \det\begin{pmatrix}
2\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} = 2\det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} = 2.$$
Then, for sides $a,b,c$, the volume of the resulting box can be written as
$$V = \det\begin{pmatrix}
a\vector{e}_1 \\ b\vector{e}_2 \\ c\vector{e}_3
\end{pmatrix}.$$
And by linearity (and that $\det I = 1$),
$$V = \det\begin{pmatrix}
a\vector{e}_1 \\ b\vector{e}_2 \\ c\vector{e}_3
\end{pmatrix} = abc\cdot\det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} = abc\,(1) = abc.$$
Also note that
$$\det\begin{pmatrix}
a\vector{e}_1 \\ b\vector{e}_2 \\ c\vector{e}_3
\end{pmatrix} = \det\begin{pmatrix}
a & 0 & 0 \\
0 & b & 0 \\
0 & 0 & c
\end{pmatrix} = abc$$
And so, we can surmise that for any diagonal matrix, we have:
Property 4: Diagonal matrices
$$\det\begin{pmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
0 & & \lambda_n
\end{pmatrix} = \lambda_1\lambda_2\cdots\lambda_n.$$
Antisymmetry
Again, consider the unit cube in $\R^3$. We now know that the determinant of the three basis vectors produces the volume:
$$V = \det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} = 1.$$
We also know that for a cube with sides of length one, its volume is always one. So, of course the order of the sides shouldn’t matter. However, by antisymmetry, switching two rows would mean:
$$V = \det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_1 \\ \vector{e}_3
\end{pmatrix} = -\det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} = -1.$$
The negative sign just indicates the orientation of the unit cube. Strictly speaking, the volume is the absolute value of the determinant:
$$V = \left|\det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix}\right|$$
And so, switching the rows in any order would not change the volume of the cube. In $\R^3$, there are $3! = 6$ possible orderings of the rows. Namely:
$$\det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix} =
\det\begin{pmatrix}
\vector{e}_3 \\ \vector{e}_1 \\ \vector{e}_2
\end{pmatrix} =
\det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_3 \\ \vector{e}_1
\end{pmatrix} = 1$$

$$\det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_1 \\ \vector{e}_3
\end{pmatrix} =
\det\begin{pmatrix}
\vector{e}_3 \\ \vector{e}_2 \\ \vector{e}_1
\end{pmatrix} =
\det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_3 \\ \vector{e}_2
\end{pmatrix} = -1$$
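It's easy to enumerate all six cases in code (an illustration of mine, assuming numpy):

```python
import itertools
import numpy as np

I3 = np.eye(3)

# All 3! = 6 row orderings of the identity and their determinants:
for perm in itertools.permutations(range(3)):
    print(perm, round(np.linalg.det(I3[list(perm)])))
# The three even orderings print 1; the three odd ones print -1.
```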
Permutation matrix
A permutation matrix $P$ is an $n\times n$ matrix obtained by permuting the standard basis vectors $\vector{e}_i$ (as rows). By properties 1 and 2, we have that

$$\det P = (-1)^\sigma$$

where $\sigma$ is the number of row swaps required to turn $P$ back into the identity matrix, $I_n$. (The number of swaps isn't unique, but its parity is, so the sign is well defined.)

And so, for any given $n$, there exist $n!$ permutation matrices of size $n\times n$.
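One way to compute $\sigma$ in code (a sketch of mine, not from the notes) is to literally perform swaps until we're back at the identity order, flipping the sign each time. Here `perm[i]` records which basis vector sits in row `i`:

```python
import numpy as np

def perm_sign(perm):
    """(-1)**sigma, where sigma counts row swaps needed to restore identity order."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]  # one swap, one sign flip
            sign = -sign
    return sign

P = np.eye(4)[[2, 0, 3, 1]]  # a made-up 4x4 permutation matrix
assert perm_sign([2, 0, 3, 1]) == round(np.linalg.det(P))  # both are -1
```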
Now consider the case where we have two identical rows. Then, by this property, switching them would mean that:
$$\det\begin{pmatrix}
\vector{v} \\ \vdots \\ \vector{v}
\end{pmatrix} =
-\det\begin{pmatrix}
\vector{v} \\ \vdots \\ \vector{v}
\end{pmatrix}$$
Remember that the determinant is just a number, and the only number equal to its own negative is zero. So, the above holds if and only if the determinant is zero.
$$\det\begin{pmatrix}
\vector{v} \\ \vdots \\ \vector{v}
\end{pmatrix} =
-\det\begin{pmatrix}
\vector{v} \\ \vdots \\ \vector{v}
\end{pmatrix} \iff
\det\begin{pmatrix}
\vector{v} \\ \vdots \\ \vector{v}
\end{pmatrix} = 0$$
Property 5: Identical rows
$$\det\begin{pmatrix}
\vector{v} \\ \vdots \\ \vector{v}
\end{pmatrix} = 0$$
Show that
$$\det\begin{pmatrix}
1 & 2 & 3 & 4 & 5 \\
5 & 6 & 7 & 8 & 9 \\
10 & 11 & 12 & 13 & 14 \\
2 & 4 & 6 & 8 & 10 \\
1 & 1 & 1 & 1 & 1
\end{pmatrix} = 0.$$
Notice that row four is a multiple of row one. So, by multilinearity:
$$\det\begin{pmatrix}
1 & 2 & 3 & 4 & 5 \\
5 & 6 & 7 & 8 & 9 \\
10 & 11 & 12 & 13 & 14 \\
2 & 4 & 6 & 8 & 10 \\
1 & 1 & 1 & 1 & 1
\end{pmatrix} = 2\cdot\det\begin{pmatrix}
1 & 2 & 3 & 4 & 5 \\
5 & 6 & 7 & 8 & 9 \\
10 & 11 & 12 & 13 & 14 \\
1 & 2 & 3 & 4 & 5 \\
1 & 1 & 1 & 1 & 1
\end{pmatrix}$$
And because two rows are the same:
$$2\cdot\det\begin{pmatrix}
1 & 2 & 3 & 4 & 5 \\
5 & 6 & 7 & 8 & 9 \\
10 & 11 & 12 & 13 & 14 \\
1 & 2 & 3 & 4 & 5 \\
1 & 1 & 1 & 1 & 1
\end{pmatrix} = 2 \cdot 0 = 0$$
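We never had to expand anything. Still, it's easy to confirm numerically (assuming numpy):

```python
import numpy as np

M = np.array([[ 1,  2,  3,  4,  5],
              [ 5,  6,  7,  8,  9],
              [10, 11, 12, 13, 14],
              [ 2,  4,  6,  8, 10],
              [ 1,  1,  1,  1,  1]], dtype=float)

print(np.linalg.det(M))  # ~0, up to floating-point noise
```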
Additionally, by properties 1 and 5:
$$\begin{align*}
\det\begin{pmatrix}
\vector{v}_1 \\ \vdots \\ \vector{v}_2
\end{pmatrix}
&= \det\begin{pmatrix}
\vector{v}_1 \\ \vdots \\ \vector{v}_2
\end{pmatrix} +
\det\begin{pmatrix}
c\vector{v}_2 \\ \vdots \\ \vector{v}_2
\end{pmatrix} \\
&= \det\begin{pmatrix}
\vector{v}_1 \\ \vdots \\ \vector{v}_2
\end{pmatrix} +
c\cdot\det\begin{pmatrix}
\vector{v}_2 \\ \vdots \\ \vector{v}_2
\end{pmatrix} \\
&= \det\begin{pmatrix}
\vector{v}_1 + c\vector{v}_2 \\ \vdots \\ \vector{v}_2
\end{pmatrix}
\end{align*}$$
Which gives us
Property 6: Row replacement
$$\det\begin{pmatrix}
\vector{v}_1 \\ \vdots \\ \vector{v}_2
\end{pmatrix} =
\det\begin{pmatrix}
\vector{v}_1 + c\vector{v}_2 \\ \vdots \\ \vector{v}_2
\end{pmatrix}$$
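This is exactly the row-replacement move from Gaussian elimination, which is what makes the property so useful. A quick spot check (mine, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

B = A.copy()
B[0] += 3.0 * B[2]   # add a multiple of row 3 to row 1

# Property 6: row replacement leaves the determinant unchanged.
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
```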
Diagonal matrices
The derivation for property 4 tells us that:
$$\det\begin{pmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
0 & & \lambda_n
\end{pmatrix} = \lambda_1\lambda_2\cdots\lambda_n.$$
However, this also holds for upper and lower triangular matrices.
$$\det\begin{pmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
* & & \lambda_n
\end{pmatrix} =
\det\begin{pmatrix}
\lambda_1 & & * \\
& \ddots & \\
0 & & \lambda_n
\end{pmatrix} = \lambda_1\lambda_2\cdots\lambda_n.$$
Because trust me bro, that's how it works. Okay, okay, here's a proof, but I'll cheat you a little bit (by assuming every $\lambda_i \neq 0$, so it can be factored out of its row).
$$\begin{align*}
\det\begin{pmatrix}
\lambda_1 & a & b \\
0 & \lambda_2 & c \\
0 & 0 & \lambda_3
\end{pmatrix}
&= \lambda_1 \lambda_2 \lambda_3
\det\begin{pmatrix}
1 & \frac{a}{\lambda_1} & \frac{b}{\lambda_1} \\
0 & 1 & \frac{c}{\lambda_2} \\
0 & 0 & 1
\end{pmatrix} \\
&= \lambda_1 \lambda_2 \lambda_3
\det\begin{pmatrix}
1 & \frac{a}{\lambda_1} & \frac{b}{\lambda_1} \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
& \boxed{R_2 - \frac{c}{\lambda_2}R_3} \\
&= \lambda_1 \lambda_2 \lambda_3
\det\begin{pmatrix}
1 & \frac{a}{\lambda_1} & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
& \boxed{R_1 - \frac{b}{\lambda_1}R_3} \\
&= \lambda_1 \lambda_2 \lambda_3
\det\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
& \boxed{R_1 - \frac{a}{\lambda_1}R_2} \\
&= \lambda_1 \lambda_2 \lambda_3
\end{align*}$$
By induction, this works for $n$ lambdas. And the same applies to lower triangular matrices, just upside-down.
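If you don't trust me (or the induction), here's a numeric spot check on random triangular matrices (my own sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
U = np.triu(rng.standard_normal((5, 5)))  # random upper triangular
L = np.tril(rng.standard_normal((5, 5)))  # random lower triangular

# Both determinants equal the product of the diagonal entries.
assert np.isclose(np.linalg.det(U), np.prod(np.diag(U)))
assert np.isclose(np.linalg.det(L), np.prod(np.diag(L)))
```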
Applying different properties…
Consider a $2\times2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$. We know that its determinant is $ad-bc$. We can also derive this by applying the properties of the determinant.
$$\begin{align*}
\det\begin{pmatrix}
a & b \\ c & d
\end{pmatrix}
&= \det\begin{pmatrix}
a\vector{e}_1 + b\vector{e}_2 \\
c\vector{e}_1 + d\vector{e}_2
\end{pmatrix} \\
&= \det\begin{pmatrix}
a\vector{e}_1 \\ c\vector{e}_1
\end{pmatrix} + \det\begin{pmatrix}
a\vector{e}_1 \\ d\vector{e}_2
\end{pmatrix} + \det\begin{pmatrix}
b\vector{e}_2 \\ c\vector{e}_1
\end{pmatrix} + \det\begin{pmatrix}
b\vector{e}_2 \\ d\vector{e}_2
\end{pmatrix} \\
&= ac \det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_1
\end{pmatrix} + ad \det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2
\end{pmatrix} + bc \det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_1
\end{pmatrix} + bd \det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_2
\end{pmatrix} \\
&= ac (0) + ad (1) + bc (-1) + bd (0) \\
&= ad - bc
\end{align*}$$
We can also do the same thing for a $3\times3$ matrix $\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}$.
$$\begin{align*}
\det\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix} &= \det\begin{pmatrix}
a_{11}\vector{e}_1 + a_{12}\vector{e}_2 + a_{13}\vector{e}_3 \\
a_{21}\vector{e}_1 + a_{22}\vector{e}_2 + a_{23}\vector{e}_3 \\
a_{31}\vector{e}_1 + a_{32}\vector{e}_2 + a_{33}\vector{e}_3
\end{pmatrix} \\
&= \text{decomposes into } 3^3 = 27 \text{ terms...}
\end{align*}$$
Notice how, for the $2\times2$, two of the four terms vanish (each contains a repeated basis vector). Here, for a $3\times3$, only six terms survive; the remaining 21 terms all contain a repeated basis vector, so their determinants are zero.
$$\begin{split}
\det\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
&= a_{11} a_{22} a_{33} \det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_2 \\ \vector{e}_3
\end{pmatrix}
+ a_{11} a_{23} a_{32} \det\begin{pmatrix}
\vector{e}_1 \\ \vector{e}_3 \\ \vector{e}_2
\end{pmatrix}
+ a_{12} a_{21} a_{33} \det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_1 \\ \vector{e}_3
\end{pmatrix} \\
&\quad+ a_{12} a_{23} a_{31} \det\begin{pmatrix}
\vector{e}_2 \\ \vector{e}_3 \\ \vector{e}_1
\end{pmatrix}
+ a_{13} a_{21} a_{32} \det\begin{pmatrix}
\vector{e}_3 \\ \vector{e}_1 \\ \vector{e}_2
\end{pmatrix}
+ a_{13} a_{22} a_{31} \det\begin{pmatrix}
\vector{e}_3 \\ \vector{e}_2 \\ \vector{e}_1
\end{pmatrix}
\end{split}$$
Then, after evaluating each permutation determinant as $\pm1$ and regrouping, we end up with our expression for the determinant of a $3\times3$ matrix.
$$\det\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
= a_{11} \det\begin{pmatrix}
a_{22} & a_{23} \\ a_{32} & a_{33}
\end{pmatrix}
- a_{12} \det\begin{pmatrix}
a_{21} & a_{23} \\ a_{31} & a_{33}
\end{pmatrix}
+ a_{13} \det\begin{pmatrix}
a_{21} & a_{22} \\ a_{31} & a_{32}
\end{pmatrix}$$
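The six surviving terms are an instance of the general sum-over-permutations (Leibniz) formula: one signed product per permutation, with sign $(-1)^\sigma$. A direct, if hopelessly slow, implementation of that idea (my own sketch, assuming numpy):

```python
import itertools
import math
import numpy as np

def det_leibniz(A):
    """Sum sign(p) * A[0, p(0)] * ... * A[n-1, p(n-1)] over all n! permutations p."""
    n = len(A)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # Parity of the permutation = parity of its inversion count.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        total += sign * math.prod(A[i][p] for i, p in enumerate(perm))
    return total

A = np.arange(9.0).reshape(3, 3) ** 2  # made-up 3x3 with nonzero determinant
assert np.isclose(det_leibniz(A), np.linalg.det(A))
```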
Then, what about $\R^4$ and beyond? The number of surviving terms is $n!$, which grows even faster than exponentially. In fact, the only way to do this in a sane manner is to turn to our good friend Gauss.
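Concretely, here's what "turning to Gauss" can look like (a sketch of mine, not the course's code): use property 2 (swaps flip the sign) and property 6 (row replacement changes nothing) to reach upper triangular form, then multiply the diagonal. This takes $O(n^3)$ operations instead of summing $n!$ terms.

```python
import numpy as np

def det_gauss(A):
    """Determinant via Gaussian elimination, using properties 2, 6, and the triangular rule."""
    A = np.array(A, dtype=float)
    n = len(A)
    sign = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest remaining entry into the pivot spot.
        p = k + np.argmax(np.abs(A[k:, k]))
        if A[p, k] == 0.0:
            return 0.0  # the whole column below the diagonal is zero => singular
        if p != k:
            A[[k, p]] = A[[p, k]]  # property 2: a swap flips the sign
            sign = -sign
        # Property 6: subtracting multiples of the pivot row changes nothing.
        A[k + 1:] -= np.outer(A[k + 1:, k] / A[k, k], A[k])
    return sign * np.prod(np.diag(A))  # triangular: product of the diagonal

M = np.random.default_rng(3).standard_normal((6, 6))
assert np.isclose(det_gauss(M), np.linalg.det(M))
```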