I had students compute the entries of the adjoint of A, and we checked
(at least a little bit!) that the result was A^{-1}. I decided
for the purposes of this diary to have `Maple` find minors,
then get determinants, then adjust the signs, and then take the
transpose: the result should be (after division of the entries by
det(A)) the inverse. And it was. I should mention that it took me
longer to write the darn `Maple` code than it did to have
people do the computations in class. The `#` indicates a
comment in `Maple` and is ignored by the program.

```
> with(linalg):
Warning, the protected names norm and trace have been redefined and unprotected

> A:=matrix(3,3,[1,-2,3,2,1,0,2,1,3]);   #defines a matrix, A.
     [1 -2 3]
A := [2  1 0]
     [2  1 3]

> det(A);   #Finds the determinant
15

> minor(A,2,1);   #This gets the (2,1) minor, apparently.
[-2 3]
[ 1 3]

> B:=matrix(3,3,[seq(seq((-1)^(i+j)*det(minor(A,i,j)),j=1..3),i=1..3)]);
   #This mess, when parsed correctly, creates almost the adjoint of A.
     [ 3 -6  0]
B := [ 9 -3 -5]
     [-3  6  5]

> evalm(A&*transpose(B));   #We need to divide by 15, which is det(A).
                            #evalm and &* do matrix multiplication.
[15  0  0]
[ 0 15  0]
[ 0  0 15]

> inverse(A);
[ 1/5  3/5 -1/5]
[-2/5 -1/5  2/5]
[ 0   -1/3  1/3]

> evalm((1/15)*transpose(B));
[ 1/5  3/5 -1/5]
[-2/5 -1/5  2/5]
[ 0   -1/3  1/3]

> B:=matrix(3,1,[y1,y2,y3]);
     [y1]
B := [y2]
     [y3]

> evalm(inverse(A)&*B);   #How to solve the matrix equation AX=B:
                          #left multiply by A^{-1}.
[   y1/5 + 3 y2/5 -   y3/5]
[-2 y1/5 -   y2/5 + 2 y3/5]
[         -   y2/3 +  y3/3]

> A2:=matrix(3,3,[1,y1,3,2,y2,0,2,y3,3]);   #Preparing for Cramer's rule
      [1 y1 3]
A2 := [2 y2 0]
      [2 y3 3]

> det(A2)/det(A);   #The same result as the second entry in A^{-1}B.
-2 y1/5 - y2/5 + 2 y3/5
```

So I "verified" Cramer's rule in a specific case; a similar computation is shown above.
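The same adjoint-then-inverse computation can be sketched outside `Maple`. Here is a minimal pure-Python version (exact arithmetic via `fractions`; the helper names `minor`, `det`, and `inv` are my own, not Maple's):

```python
from fractions import Fraction

A = [[1, -2, 3],
     [2,  1, 0],
     [2,  1, 3]]

def minor(M, i, j):
    """The (i,j) minor: delete row i and column j (0-indexed, unlike Maple)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

n = len(A)
# cofactor matrix: entries (-1)^(i+j) * det of the (i,j) minor
C = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
# adjoint = transpose of the cofactor matrix; inverse = adjoint / det(A)
inv = [[Fraction(C[j][i], det(A)) for j in range(n)] for i in range(n)]
```

The rows of `inv` should reproduce the `inverse(A)` output shown above.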

I have never *used* Cramer's rule in any significant
situation. But, again, should you need such an explicit formula, you
should be aware that it exists. I also mention that, using my
`Maple` on a fairly new computer, over half a second was needed
to compute the inverse of a 5-by-5 symbolic matrix. So these things
should be done with some care. Sigh.

So the stuff last time was really background. I don't think you need
to know everything I discussed, but I do honestly believe that
engineers should have some feeling for the *real definition*,
even if it is very painful (and it is, indeed it is). In any case,
*you need to change your paradigm about
determinants!* The cases n=2 and n=3 are too darn simple to give
you intuition for the general case.

The Oxford English Dictionary lists the first appearance of
*paradigm* in 1483 when it meant "an example or pattern", as it
does today.

But for the purposes of this course you must know some standard computational methods of evaluating determinants. So I'll tell you about row operations and cofactor expansions.

**Row operations and their effects on determinants**

| The row operation | What it does to det |
|---|---|
| Multiply a row by a constant | Multiplies det by that constant |
| Interchange adjacent rows | Multiplies det by -1 |
| Add a row to another row | Doesn't change det |

**Examples**

Suppose A is this matrix:

```
( -3  4  0 18 )
(  2 -9  5  6 )
( 22 14 -3 -4 )
(  4  7 22  5 )
```

Then the following matrix has determinant twice the value of det(A):

```
( -6  8  0 36 )
(  2 -9  5  6 )
( 22 14 -3 -4 )
(  4  7 22  5 )
```

Also, the following matrix has determinant -det(A):

```
( -3  4  0 18 )
( 22 14 -3 -4 )
(  2 -9  5  6 )
(  4  7 22  5 )
```

The following matrix has the same determinant as det(A):

```
( -3  4  0 18 )
(  2 -9  5  6 )
( 24  5  2  2 )
(  4  7 22  5 )
```
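These three claims can be checked mechanically. A small sketch in pure Python, using a recursive cofactor `det` helper of my own as a black box:

```python
def minor(M, i, j):
    # delete row i and column j (0-indexed)
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

A = [[-3, 4, 0, 18], [2, -9, 5, 6], [22, 14, -3, -4], [4, 7, 22, 5]]

doubled = [[2 * x for x in A[0]]] + A[1:]         # first row multiplied by 2
swapped = [A[0], A[2], A[1], A[3]]                # rows 2 and 3 interchanged
added = [A[0], A[1],
         [r + s for r, s in zip(A[2], A[1])],     # row 2 added to row 3
         A[3]]

assert det(doubled) == 2 * det(A)
assert det(swapped) == -det(A)
assert det(added) == det(A)
```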

**Silly examples (?)**

Look:

```
   ( 1 2 3 )      ( 1 2 3 )
det( 4 5 6 ) = det( 3 3 3 )   (row2 - row1)
   ( 7 8 9 )      ( 3 3 3 )   (row3 - row2)
```

Now if two rows are identical, the det is 0, since interchanging them changes the sign and leaves the matrix unchanged. So since det(A)=-det(A), det(A) must be 0.

Look even more at this:

```
   (   1   4   9  16 )      (  1  4   9  16 )                   (  1  4  9 16 )
det(  25  36  49  64 ) = det( 24 32  40  48 ) (row2-row1) = det( 24 32 40 48 )
   (  81 100 121 144 )      ( 56 64  72  80 ) (row3-row2)      ( 32 32 32 32 ) (row3-row2)
   ( 169 196 225 256 )      ( 88 96 104 112 ) (row4-row3)      ( 32 32 32 32 ) (row4-row3)
```

so since the result has two identical rows, the determinant of the original matrix must be 0.

There are all sorts of tricky things one can do with determinant evaluations, if you want. Please notice that the linear systems gotten from, say, the finite element method applied to important PDE's definitely give coefficient matrices which are not random: they have lots of structure. So the tricky things above aren't that ridiculous.

**Use row operations to ...**

One standard way of evaluating determinants is to use row operations
to change a matrix to either upper or lower triangular form (or even
diagonal form, if you are lucky). Then the determinant will be the
product of the diagonal terms. Here I used row operations (actually I
had `Maple` use row operations!) to change this random (well, the
entries were produced sort of randomly by `Maple`) to an
upper-triangular matrix.

```
[1 -1 3 -1]
[4  4 3  4]
[3  2 0  1]
[3  1 3  3]
```

And now I use multiples of the first row to create 0's below the (1,1) entry. The determinant won't change: I'm not multiplying any row in place, just adding multiples of row_{1} to other rows.

```
[1 -1  3 -1]
[0  8 -9  8]
[0  5 -9  4]
[0  4 -6  6]
```

And now multiples of the second row to create 0's below the (2,2) entry.

```
[1 -1  3    -1]
[0  8 -9     8]
[0  0 -27/8 -1]
[0  0 -3/2   2]
```

Of course, multiples of the third row to create 0's below the (3,3) entry.

```
[1 -1  3    -1  ]
[0  8 -9     8  ]
[0  0 -27/8 -1  ]
[0  0  0    22/9]
```

Wow, an upper triangular matrix! The determinant of the original matrix must be 1·8·(-27/8)·(22/9), which is -66. Sigh.
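The same elimination can be scripted. A sketch in Python with exact fractions, assuming (as in this example) that no pivot is ever zero, so no row swaps are needed:

```python
from fractions import Fraction

def upper_triangular(M):
    """Reduce M to upper triangular form using only 'add a multiple of one
    row to another row', which leaves the determinant unchanged.
    Assumes every pivot is nonzero (no row swaps), as in this example."""
    U = [[Fraction(x) for x in row] for row in M]
    n = len(U)
    for c in range(n):
        for r in range(c + 1, n):
            factor = U[r][c] / U[c][c]
            U[r] = [a - factor * b for a, b in zip(U[r], U[c])]
    return U

A = [[1, -1, 3, -1], [4, 4, 3, 4], [3, 2, 0, 1], [3, 1, 3, 3]]
U = upper_triangular(A)

diagonal = [U[i][i] for i in range(4)]   # expect 1, 8, -27/8, 22/9
product = diagonal[0] * diagonal[1] * diagonal[2] * diagonal[3]
```

The product of the diagonal entries is the determinant of the original matrix.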

**Minors**

If A is an n-by-n matrix, then the (i,j)^{th} minor of A is
the (n-1)-by-(n-1) matrix obtained by throwing away the
i^{th} row and j^{th} column of A.
For example, if A is

```
[1 -1 3 -1]
[4  4 3  4]
[3  2 0  1]
[3  1 3  3]
```

Then the (2,3) minor is gotten by deleting the second row and the third column:

```
> minor(A,2,3);
[1 -1 -1]
[3  2  1]
[3  1  3]
```

Of course I had `Maple` do it.

**Evaluating determinants by cofactor expansions**

This field has a bunch of antique words. Here is another. It turns out
that the determinant of a matrix can be evaluated by what are called
*cofactor expansions*. This is rather weird. When I've gone
through the proof that cofactor expansions work, I have not really
felt enlightened. So I will not discuss proofs. Here is the
idea. Suppose A is an n-by-n matrix. Each
(i,j) position in this n-by-n matrix has an associated minor which
I'll call M_{ij}. Then:

- For any i,
det(A)=**SUM**_{j=1}^{n}(-1)^{i+j}a_{ij}det(M_{ij}). This is called expanding along the i^{th} row.
- For any j,
det(A)=**SUM**_{i=1}^{n}(-1)^{i+j}a_{ij}det(M_{ij}). This is called expanding along the j^{th} column.

Here: let's try an example. Suppose A is

```
[1 -1 3 -1]
[4  4 3  4]
[3  2 0  1]
[3  1 3  3]
```

as before. I asked `Maple` to compute the determinants of the minors along the first row.

Here are the results:

```
> det(minor(A,1,1));
-3
> det(minor(A,1,2));
6
> det(minor(A,1,3));
-16
> det(minor(A,1,4));
-21
```

Remember that the first row is [1 -1 3 -1]. Now the sum, with the +/- signs, is 1·(-3)-(-1)·6+3·(-16)-(-1)·(-21) = -3+6-48-21 = -66.
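A hedged sketch of the same expansion in Python (0-indexed, unlike `Maple`; `minor`, `det`, and `expand` are my own helper names). Expanding along any row gives the same value:

```python
def minor(M, i, j):
    """The (i,j) minor, 0-indexed: delete row i and column j."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    # reference determinant: cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def expand(M, i):
    """Cofactor expansion along row i (0-indexed)."""
    return sum((-1) ** (i + j) * M[i][j] * det(minor(M, i, j))
               for j in range(len(M)))

A = [[1, -1, 3, -1], [4, 4, 3, 4], [3, 2, 0, 1], [3, 1, 3, 3]]

minors_row1 = [det(minor(A, 0, j)) for j in range(4)]   # should be -3, 6, -16, -21
```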

**Recursion and strategy**

You should try some examples, of course. This is about the only way I
know to learn this stuff. If I had to give a short definition of
determinant, and if I were allowed to use recursion, I think that I
might write the following:

**Input** A, an n-by-n matrix.

If n=1, then det(A)=a_{11}

If n>1, then
det(A)=**SUM**_{j=1}^{n}(-1)^{j+1}a_{1j}det(M_{1j})
where M_{1j} is the (n-1)-by-(n-1) matrix obtained by
deleting the first row and the j^{th} column.

This is computing det(A) by repeatedly expanding along the first
row. I've tried to write such a program, and if you have the time and
want some amusement, you should try this also. The recursive nature
rather quickly fills up the stack (n! is big big
big) so this isn't too practical. But there are
certainly times when the special form of a matrix allows quick and
efficient computation by cofactor expansions.
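Such a recursive program really is short; here is my own sketch, with a call counter added to illustrate the factorial-style growth mentioned above (the counter is my addition, not part of the definition):

```python
calls = 0  # count how many times det is invoked

def det(M):
    """Determinant by the recursive definition: expand along the first row."""
    global calls
    calls += 1
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        M1j = [row[:j] + row[j + 1:] for row in M[1:]]  # delete row 1, column j+1
        total += (-1) ** j * M[0][j] * det(M1j)
    return total

A = [[1, -1, 3, -1], [4, 4, 3, 4], [3, 2, 0, 1], [3, 1, 3, 3]]
value = det(A)
# already 41 calls for n=4 (1 + 4 + 4*3 + 4*3*2), and the count
# grows like n!, so this is not practical for large n
```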

**More formulas**

You may remember that we had a

**A decision problem** Given an n-by-n matrix, A, how can
we decide if A is invertible?

Here is how to decide:

A is invertible exactly when
det(A) is *not* 0.

Whether this is practical depends on the
situation.

There was also a

**Computational problem** If we know A is invertible, what is the
best way of solving AX=B? How can we create A^{-1}
efficiently?

Well, this has an answer, too. The answer is on page 383 of the
text. The inverse of A is the constant (1/det(A)) multiplied by the
*adjoint of A*. I have to look this up. The adjoint is the
transpose (means: flip over the main diagonal, or, algebraically,
interchange i and j) of the matrix whose entries are
(-1)^{i+j}det(M_{ij}).

I think this is hideous and the only example I have seen worked out in detail (enough to be convincing!) is n=2. So here goes:

A is

```
( a b )
( c d )
```

and M_{11}=d and M_{12}=c and M_{21}=b and M_{22}=a. Then the adjoint (put in +/- signs, put in the transpose) is

```
(  d -b )
( -c  a )
```

Since the det is ad-bc, the inverse must be

```
(  d/(ad-bc) -b/(ad-bc) )
( -c/(ad-bc)  a/(ad-bc) )
```

If you mentally multiply this matrix by A you will get I.
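A quick numerical check of the 2-by-2 formula, with sample entries of my own choosing:

```python
from fractions import Fraction

# sample entries (my own): a=2, b=7, c=1, d=3, so ad-bc = -1
a, b, c, d = map(Fraction, (2, 7, 1, 3))
detA = a * d - b * c

A = [[a, b], [c, d]]
inv = [[ d / detA, -b / detA],
       [-c / detA,  a / detA]]

# multiplying A by its claimed inverse should give the identity matrix
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
```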

So there will be times you might need to decide between using an algorithmic approach and trying to get a formula. Let me show you a very simple example where you might want a formula. This example itself is perhaps not too realistic, but maybe you can see what real examples might look like.

Suppose we need to understand the linear system

2x+7y=6

*Q*x+3y=*Q*

Well, if the parameter *Q* is 0, then (second equation) y=0 and
so (first equation) x=3. We could think of *Q* as some sort of
control or something. I tried inadequately to convey a sort of
physical problem that this might model, but the effort was perhaps not
totally successful. What happens to x and y when we vary *Q*,
for example, move *Q* up from 0 to a small positive number? I
don't think this is clear. But we can in fact find a formula for x and
y. This is sort of neat, actually. You may remember such a formula
from high school, even.

```
       ( 6 7 )            ( 2 6 )
    det( Q 3 )         det( Q Q )
x = -----------    y = -----------
       ( 2 7 )            ( 2 7 )
    det( Q 3 )         det( Q 3 )
```

so that x=(18-7Q)/(6-7Q) and y=(2Q-6Q)/(6-7Q)=-4Q/(6-7Q).
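Here is a small Python sketch of that formula (the function names are mine), evaluating x and y for particular values of Q:

```python
from fractions import Fraction

def det2(M):
    """Determinant of a 2-by-2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def solve(Q):
    """The 2-by-2 determinant formula for 2x+7y=6, Qx+3y=Q
    (valid while 6-7Q is not 0)."""
    Q = Fraction(Q)
    D = det2([[2, 7], [Q, 3]])        # 6 - 7Q
    x = det2([[6, 7], [Q, 3]]) / D    # (18 - 7Q)/(6 - 7Q)
    y = det2([[2, 6], [Q, Q]]) / D    # -4Q/(6 - 7Q)
    return x, y
```

At Q=0 this gives x=3 and y=0, as noted above, and for any other allowed Q the pair (x,y) satisfies both equations.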

**Cramer's Rule**

I think this is Theorem 8.23 (page 392) of the text. It discusses a
formula for solving AX=B where A is an n-by-n matrix with det(A) not
equal to 0, and B is a known n-by-1 matrix, and X is an n-by-1 matrix
of unknowns. Then x_{j} turns out to be
det(A_{j})/det(A), where A_{j} is the matrix obtained
by replacing the j^{th} column of A by B.

Well, the QotD was computing part of an example when n=3 of this.

3x-5y+z=2

2x+5y-z=0

x-y+z=7

What is z? According to Cramer's Rule, z is

```
       ( 3 -5  2 )
    det( 2  5  0 )
       ( 1 -1  7 )
z = ----------------
       ( 3 -5  1 )
    det( 2  5 -1 )
       ( 1 -1  1 )
```

I think I computed the determinant on the top in several ways, once with row operations, and once by cofactor expansions. Both gave 161. And, of course, there is this method:

```
> det(matrix(3,3,[3,-5,2,2,5,0,1,-1,7]));
161
```

The bottom determinant was what I asked people to compute. It is

```
> det(matrix(3,3,[3,-5,1,2,5,-1,1,-1,1]));
20
```

uhhhh ... twenty, yes, that's it, twenty. That's the answer to the QotD. Or, actually, we can check another way:

```
> A:=matrix(3,4,[3,-5,1,2,2,5,-1,0,1,-1,1,7]);
     [3 -5  1 2]
A := [2  5 -1 0]
     [1 -1  1 7]

> rref(A);
[1 0 0    2/5]
[0 1 0  29/20]
[0 0 1 161/20]
```

This is the augmented matrix corresponding to the original system, and so z must be 161/20.
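The whole of Cramer's rule fits in a few lines of Python. A sketch of my own (exact arithmetic, helper names mine), checked against this system:

```python
from fractions import Fraction

def minor(M, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def cramer(A, B):
    """Solve AX=B by Cramer's rule: x_j = det(A_j)/det(A),
    where A_j is A with column j replaced by B. Needs det(A) != 0."""
    d = det(A)
    return [Fraction(det([row[:j] + [B[i]] + row[j + 1:]
                          for i, row in enumerate(A)]), d)
            for j in range(len(A))]

A = [[3, -5, 1], [2, 5, -1], [1, -1, 1]]
B = [2, 0, 7]
x, y, z = cramer(A, B)
```

This reproduces the rref answer above: x=2/5, y=29/20, z=161/20.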

I first merely stated two facts which are sometimes really useful in computing determinants:

- If A and B are n-by-n matrices, then det(AB)=det(A)det(B).

  **Comments** This is best understood if one realizes that det also measures a volume distortion factor: A in effect "maps" R^{n} to R^{n} by matrix multiplication, and det(A), it turns out, is the factor by which n-dimensional volumes are stretched. So multiplying by A and then by B concatenates the effects. Notice, as we observed in class, that det(A+B) is *not* det(A)+det(B) in general. Mr. Shah's example: A=I_{2} and B=2I_{2}, so A+B=3I_{2} and det(A)=1 and det(B)=4 and det(A+B)=9.

- If A is an n-by-n matrix, with A^{t} its transpose, then det(A)=det(A^{t}).

**Comments** Here the transpose is the result of flipping the matrix over its main diagonal: rows and columns get interchanged. Algebraically, the ij^{th} entry in A^{t} is the ji^{th} entry in A. The reason these determinants are the same is that when the flip is done, the sign of each rook arrangement is preserved: (-1)^{#} counts stuff in each upper-right quadrant, but each upper-right quadrant is counted exactly when the matrix element is in the lower-left quadrant of the other element, and transposing lets upper right change to lower left. Whew -- this is actually a correct explanation, but perhaps not totally clear!
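Both facts are easy to spot-check in Python (matrices of my own choosing; `det` is the same recursive cofactor helper as before):

```python
def minor(M, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, -2, 3], [2, 1, 0], [2, 1, 3]]
B = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]

assert det(matmul(A, B)) == det(A) * det(B)   # product rule
assert det(transpose(A)) == det(A)            # transpose rule

# Mr. Shah's example: det(A+B) is NOT det(A)+det(B)
I2 = [[1, 0], [0, 1]]
J2 = [[2, 0], [0, 2]]
S = [[I2[i][j] + J2[i][j] for j in range(2)] for i in range(2)]   # 3*I2
assert det(S) == 9 and det(I2) + det(J2) == 5
```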