Take your original augmented matrix, and put the coefficient matrix into RREF. Then you get something like

    ( BLOCK OF 1's & 0's WITH  |      |  Linear stuff  )
    ( THE 1's MOVING DOWN AND  | JUNK |  from original )
    ( TO THE RIGHT.            |      |  right sides   )
    ( -------------------------------------------- )
    ( MAYBE HERE SEVERAL       |      |  More such     )
    ( ROWS OF 0's              |      |  linear stuff  )

Please observe that the lower right-hand corner now plays the part of the compatibility conditions which must be satisfied. All of those linear "fragments" must be equal to 0 if the original system has solutions. Now if these are 0, we can "read off" solutions in much the same manner as the example. The block labeled JUNK contains whatever entries remain in the non-pivot columns, which can be any numbers.
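To make the compatibility conditions concrete, here is a minimal sketch in Python (the tiny system x+2y=3, 2x+4y=b and the helper name are made up for illustration): one elimination step turns the second equation into a "fragment" which must be 0 for solutions to exist.

```python
from fractions import Fraction

# A made-up system with dependent rows: x + 2y = 3 and 2x + 4y = b.
def reduced_last_row(b):
    R1 = [Fraction(1), Fraction(2), Fraction(3)]   # x + 2y = 3
    R2 = [Fraction(2), Fraction(4), Fraction(b)]   # 2x + 4y = b
    R2 = [a - 2 * c for a, c in zip(R2, R1)]       # R2 <- R2 - 2*R1  ->  (0 0 | b-6)
    return R2

assert reduced_last_row(6) == [0, 0, 0]   # fragment is 0: solutions exist
assert reduced_last_row(7) == [0, 0, 1]   # fragment non-zero: no solutions
```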

**Problem 3**

Are the vectors (4,3,2) and (3,2,3) and (-4,4,3) and (5,2,1) in
R^{3} linearly independent? Now I wrote the vector
equation:

A(4,3,2)+B(3,2,3)+C(-4,4,3)+D(5,2,1)=0 (this is (0,0,0) for this
instantiation of "linear independence") which gives me the system:

4A+3B-4C+5D=0 (from the first components of the vectors)

3A+2B+4C+2D=0 (from the second components of the vectors)

2A+3B+3C+1D=0 (from the third components of the vectors)

and therefore would need to row reduce

    ( 4  3  -4  5 )
    ( 3  2   4  2 )
    ( 2  3   3  1 )

I started to do this, but then ... I thought a bit. My goal was to get the RREF of this matrix, and use that to argue about whether the original system had solutions other than the trivial solution.

What can these RREF's look like? Let me write down all of the possible RREF's of a 3 by 4 matrix whose first column (as here) has some entry which is not zero.

    ( 1 0 0 * )   ( 1 0 * 0 )   ( 1 0 * * )   ( 1 * 0 0 )
    ( 0 1 0 * )   ( 0 1 * 0 )   ( 0 1 * * )   ( 0 0 1 0 )
    ( 0 0 1 * )   ( 0 0 0 1 )   ( 0 0 0 0 )   ( 0 0 0 1 )

    ( 1 * 0 * )   ( 1 * * 0 )   ( 1 * * * )
    ( 0 0 1 * )   ( 0 0 0 1 )   ( 0 0 0 0 )
    ( 0 0 0 0 )   ( 0 0 0 0 )   ( 0 0 0 0 )

In all of these 3 by 4 matrices, the entry * stands for something which could be any number, zero or non-zero. I hope I haven't missed one! In every one of the matrices above, there will be non-trivial solutions to the related homogeneous system. For example, in the first matrix, D is free to take any value, and the equations can be made correct by selecting the right values of A and B and C. In the second matrix, D=0 certainly (because of the last row), but C can be any value, and A and B can be selected to make the other two equations correct. In general, each column without a leading 1 corresponds to a free variable, and each leading 1 corresponds to a variable whose value can be chosen to make its equation correct. Therefore all of these RREF's represent homogeneous systems with non-trivial solutions. And notice that if the first column had been all 0's, then A could have any value. So, in fact, a homogeneous system having 4 variables and 3 equations will always have a non-trivial solution.

A homogeneous system with more variables than equations always has an infinite number of non-trivial solutions.
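This can be checked mechanically. A sketch in Python, using exact fractions and a hand-rolled RREF helper (the helper is mine, not from the text), applied to the coefficient matrix of Problem 3: the free column yields a non-trivial solution.

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M, plus the list of pivot columns."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# Coefficient matrix of the homogeneous system in Problem 3 (4 unknowns, 3 equations).
A = [[4, 3, -4, 5], [3, 2, 4, 2], [2, 3, 3, 1]]
R, pivots = rref(A)
free = [c for c in range(4) if c not in pivots]   # columns with no leading 1

# Build a non-trivial solution: set the free variable to 1; each pivot
# variable is then minus the entry in the free column of its RREF row.
x = [Fraction(0)] * 4
x[free[0]] = Fraction(1)
for row, p in zip(R, pivots):
    x[p] = -row[free[0]]

assert any(v != 0 for v in x)                                   # non-trivial
assert all(sum(a * v for a, v in zip(eq, x)) == 0 for eq in A)  # solves AX=0
```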

Many problems in engineering and science turn out to be successfully
modeled by systems of linear equations. My examples so far have mostly
been "inspired" by the things I believe you will see in studying ODE's
and PDE's, and have also been influenced by some of the primary
objects in numerical analysis. The textbook has a wonderful diagram, a
sort of flow chart, for solving linear systems on p. 366. I think our
discussions in class have been sufficient for you to verify the
diagram, and the diagram contains just about everything you
*need* to know about the theory behind the solution of systems of
linear equations. The only remaining definition you need is that of
the **rank** of a matrix. The rank is the number of non-zero rows
in the RREF of the matrix.
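That definition can be sketched in code (the helper is my own): row-reduce and count the non-zero rows. The 3-row matrix below is the coefficient matrix from one of the examples further down.

```python
from fractions import Fraction

def rref_nonzero_rows(M):
    """Row-reduce M to RREF and count the non-zero rows: that count is the rank."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0                                 # next pivot row
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M, r

# Three equations in two unknowns: rows (2 3), (-5 7), (4 5).
R, rank = rref_nonzero_rows([[2, 3], [-5, 7], [4, 5]])
print(rank)        # 2: the RREF is (1 0), (0 1), (0 0)
assert rank == 2 and R[2] == [0, 0]
```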

Here is my version of the diagram in HTML:

For m linear equations in n unknowns, AX=B, there are two cases: B=0 and B not 0. Let rank(A)=r.

    AX=0
     |
     v
     ----------------------------------------
     |                    |
     v                    v
    Unique sol'n: X=0     Infinite number of sol'ns.
    rank(A)=n             rank(A)<n, n-r arbitrary
                          parameters in the sol'n

    AX=B, B not 0
     |
     v
     ----------------------------------------
     |                        |
     v                        v
    Consistent                Inconsistent
    rank(A)=rank(A|B)         rank(A)<rank(A|B)
     |                        |
     v                        v
     -----------------        No solutions
     |             |
     v             v
    Unique         Infinite number of sol'ns
    solution       rank(A)<n, n-r arbitrary
    rank(A)=n      parameters in the sol'n

Let me try to offer a gloss on this diagram, which maybe looks simple but covers so many different situations which you will encounter.

**gloss** 1. [Linguistics][Semantics] a. an explanatory word or phrase inserted between the lines or in the margin of a text.

First,

AX=0: m equations in n unknowns; B=0

| I | II |
|---|---|
| Unique sol'n: X=0. rank(A)=n | Infinite number of solutions. rank(A)<n, n-r arbitrary parameters in the solutions |

When B is not zero then we've got:

AX=B: m equations in n unknowns; B not 0

| III | IV | V |
|---|---|---|
| Consistent: rank(A)=rank(A\|B) | Consistent: rank(A)=rank(A\|B) | Inconsistent: rank(A)<rank(A\|B) |
| Unique solution. rank(A)=n | Infinite number of solutions. rank(A)<n, n-r arbitrary parameters in the solutions | No solutions |
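The whole table can be sketched as a small classifier, under the table's numbering (I=unique, II=infinite for B=0; III, IV, V for B not 0). The function names and the elimination helper are mine, not the textbook's.

```python
from fractions import Fraction

def rank(M):
    """Number of non-zero rows after forward elimination = rank of M."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(A, B):
    """Which case (I-V) of the diagram the system AX=B falls into."""
    n = len(A[0])
    rA = rank(A)
    if all(b == 0 for b in B):                    # homogeneous: AX=0
        return "I" if rA == n else "II"
    aug = [row + [b] for row, b in zip(A, B)]     # augmented matrix (A|B)
    if rank(aug) > rA:
        return "V"                                # inconsistent: no solutions
    return "III" if rA == n else "IV"

print(classify([[2, 3]], [0]))                         # II
print(classify([[2, 3], [-5, 7]], [0, 0]))             # I
print(classify([[2, 3], [-5, 7], [4, 5]], [0, 0, 0]))  # I
print(classify([[2, 3]], [1]))                         # IV
print(classify([[2, 3], [-5, 7]], [1, 2]))             # III
print(classify([[2, 3], [-5, 7], [4, 5]], [1, 2, 3]))  # V
```

The six calls at the end are exactly the six small systems analyzed in class below.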

By the way, I do not recommend that you memorize this information. No one I know has done this, not even the most compulsive. But everyone I know who uses linear algebra has this installed in their brains. As I mentioned in class, I thought that a nice homework assignment would be for students to find examples of each of these (in fact, there have already been examples of each of these in the lectures!). The problem with any examples done "by hand" is that they may not reflect reality. To me, reality might begin with 20 equations in 30 unknowns, or maybe 2,000 equations ....

So I just gave 6 examples which we analyzed in class. These examples
are (I hope!) *simple* (m=number of equations; n=# of variables;
r=rank(A), where A is the coefficient matrix):

- **A.** 2x+3y=0

m=1; n=2; r=1. We are in **case II**. The solutions are x=(3/2)t and y=-t, or all linear combinations of ((3/2),-1), a basis of the 1-dimensional solution space.

- **B.** 2x+3y=0

-5x+7y=0

m=2; n=2; r=2. r=2 since

    ( 2  3 )   ( 1  3/2  )   ( 1 0 )
    (-5  7 ) ~ ( 0  29/2 ) ~ ( 0 1 )

I think you could already have seen that the rows of this 2-by-2 matrix were linearly independent just by looking at it (don't try "just looking at" a random 500-by-300 matrix!), so r=2. This is **case I**. There is a unique solution, x=0 and y=0, the trivial solution.

- **C.** 2x+3y=0

-5x+7y=0

4x+5y=0

m=3; n=2; r=2. r=2 since r is at least 2 using the previous row reduction, and r can be at most 2 since the number of variables is 2. Again, this is **case I**. There is a unique solution, x=0 and y=0, the trivial solution.

- **D.** 2x+3y=1

m=1; n=2; r=1. We are in **case IV**. The solutions are (-1,1)+t((3/2),-1), an infinite 1-parameter family of solutions. Where did this come from? Here we first searched for a *particular solution* of 2x+3y=1, and we *guessed* x_{p}=-1 and y_{p}=1. (Don't try this with a big system: use row reduction or, better, use a computer!) Then I looked at 2x+3y=0 and used the list of all solutions of the *associated homogeneous system* given in example A, above: x_{h}=(3/2)t and y_{h}=-t. Now what? We use *linearity*:

2x_{p}+3y_{p}=1

2x_{h}+3y_{h}=0 (a list of all solutions)

2(x_{p}+x_{h})+3(y_{p}+y_{h})=1.

- **E.** 2x+3y=1

-5x+7y=2

m=2; n=2; r=2. Here rank(A)=rank(A|B), and further row reduction shows that

    ( 2  3 | 1 )   ( 1  3/2  | 1/2 )   ( 1 0 | 1/2 - (3/2)·(9/29) )
    (-5  7 | 2 ) ~ ( 0  29/2 | 9/2 ) ~ ( 0 1 | 9/29               )

so we are in **case III**, with exactly one solution, which row reduction has produced: x=1/2-(3/2)·(9/29) and y=9/29.

- **F.** 2x+3y=1

-5x+7y=2

4x+5y=3

Now m=3; n=2; r=2. But look:

    ( 2  3 | 1 )   ( 1  3/2  | 1/2 )   ( 1 0 | 1/2 - (3/2)·(9/29) )
    (-5  7 | 2 ) ~ ( 0  29/2 | 9/2 ) ~ ( 0 1 | 9/29               )
    ( 4  5 | 3 )   ( 0  -1   | 1   )   ( 0 0 | 1+(9/29)           )

I am lazy and I know that 1+(9/29) is not 0, so the row reduction showed that rank(A)=2<rank(A|B)=3: **case V**, with no solutions.
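The linearity argument from example D can be verified numerically. A sketch, assuming the particular solution (-1,1) and the homogeneous family x_h=(3/2)t, y_h=-t from example A:

```python
from fractions import Fraction

# Particular solution of 2x+3y=1 (guessed, as in example D) plus any
# homogeneous solution of 2x+3y=0 is again a solution of 2x+3y=1.
xp, yp = Fraction(-1), Fraction(1)
for t in [Fraction(0), Fraction(1), Fraction(-7), Fraction(5, 3)]:
    xh, yh = Fraction(3, 2) * t, -t          # homogeneous family from example A
    assert 2 * xp + 3 * yp == 1              # particular solution
    assert 2 * xh + 3 * yh == 0              # homogeneous solution
    assert 2 * (xp + xh) + 3 * (yp + yh) == 1  # linearity: the sum solves AX=B
```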

**Heuristic, the word**

adj. 1. allowing or assisting to discover. 2. [Computing] proceeding to a solution by trial and error ("heuristic method"). 3. [Education] a system of education under which pupils are trained to find out things for themselves.