Math 192 diary, part 2, fall 2005:

Monday, December 12

Volunteers will go over various final exam problems.

Thursday, December 8

Mr. Scheinberg answered questions about the homework problems in the syllabus for the last four sections of the text. We looked at a Lissajous curve and some bad puns were made about the name Lissajous.

On Monday I hope that students will do problems from two previous final exams. I will have office hours on Friday, December 16, and Monday, December 19, and be available for a review session from 7 to 9 PM on Tuesday, December 20. The final exam is at noon on December 21 in Hill 525.

Wednesday, December 7

I drew some polar coordinate curves: I got a formula for arc length (from the parametric formula) and checked it on a circle.
INT_{theta=STARTING THETA}^{theta=ENDING THETA} sqrt(r^2+(dr/dtheta)^2) dtheta

I got a formula for area and checked it sort of on a circle. The formula was derived by looking at circular sectors and seeing how much area they had.

Then I considered a spool of thread with 50 yards of thread. We found the length of a cardioid, r=A(1-sin(theta)). We needed to use a trig identity. The length was 8A, so the thread could enclose a cardioid with A=6.25. We then found the area of such a cardioid, and this was (3Pi/2)A^2. With A=6.25, we saw that such a spool would make a very large Valentine!
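Here is a small numerical check of the cardioid computation (my own Python sketch, not something done in class; the helper name `midpoint_integral` is mine). A simple midpoint-rule integration of the arc-length and area formulas should reproduce 8A and (3Pi/2)A^2.

```python
import math

def midpoint_integral(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

A = 6.25  # scale chosen so the arc length uses up 50 yards of thread

# Arc length of the cardioid r = A(1 - sin(theta)):
# integrand is sqrt(r^2 + (dr/dtheta)^2), with dr/dtheta = -A cos(theta)
length = midpoint_integral(
    lambda t: math.sqrt((A * (1 - math.sin(t)))**2 + (A * math.cos(t))**2),
    0, 2 * math.pi)

# Area in polar coordinates: (1/2) times the integral of r^2 dtheta
area = midpoint_integral(
    lambda t: 0.5 * (A * (1 - math.sin(t)))**2, 0, 2 * math.pi)

print(length)  # close to 8*A = 50
print(area)    # close to (3*Pi/2)*A^2
```

The numbers come out to about 50 and about 184 square yards, consistent with the formulas above.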

Read the text on parametric curves and do problems, please. Read the text on polar coordinates and do problems, please. Also I will ask Mr. Scheinberg to answer questions about these four sections of the text.
Here is information about the final exam. Please think about when we should have a review session or I should have office hours. I hope students will do problems from the last two final exams I gave in Math 192 on Monday, the last class.

Monday, December 5

Word of the day: desultory

  1. going constantly from one subject to another, esp. in a half-hearted way.
  2. disconnected; unmethodical; superficial.
The instructor gave a rather desultory (and disconnected) presentation of some parametric curve material.

He showed some parametric curves from the book Normal Accidents by Charles Perrow: pictures of ship collision tracks.

I described my "favorite" parametric curve: (x=t^3-t, y=1/(1+t^2)).

I wanted to do calculus with parametric curves. The object was to get formulas for the slope of the tangent line to a parametric curve and for the length of a parametric curve.

If x=f(t) and y=g(t), then f(t+delta t) is approximately f(t)+f'(t)delta t+M(delta t)^2 (this is from Taylor's Theorem). Also g(t+delta t) is approximately g(t)+g'(t)delta t+N(delta t)^2. The reason for the different constant ("N" rather than "M") is that these are different functions to which Taylor's Theorem is being applied. Then we analyze the slope of the secant line, (delta y)/(delta x): this is [g'(t)delta t+N(delta t)^2]/[f'(t)delta t+M(delta t)^2] (the f(t)'s and g(t)'s cancel in the differences), and we see that the limit as delta t-->0 must be g'(t)/f'(t), and this must be the slope of the tangent line, dy/dx. The formula in the book, developed in a different way using the Chain Rule, is also nicely stated in classical Leibniz notation as (dy/dt)/(dx/dt).

I applied this to find the equation of a line tangent to my favorite curve, x=t^3-t, y=1/(1+t^2), when t=2. I found a point on the curve and I found the slope of the tangent line, and then found the line. It sloped slightly down, just as the picture suggested. Then I found the angle between the parts of the curve at the "self-intersection", when t=+/-1. This involved some work with geometry and with finding the slopes of the appropriate tangent lines. The best we could do is with some numerical approximations to the angles.
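A quick Python sketch (mine, not course material) can check the tangent-line computation at t=2 against a numerical difference quotient taken along the curve:

```python
import math

f = lambda t: t**3 - t          # x(t)
g = lambda t: 1 / (1 + t**2)    # y(t)

t0 = 2
x0, y0 = f(t0), g(t0)           # the point on the curve: (6, 1/5)

# dy/dx = (dy/dt)/(dx/dt)
dxdt = 3 * t0**2 - 1            # 11
dydt = -2 * t0 / (1 + t0**2)**2 # -4/25
slope = dydt / dxdt             # -4/275, slightly negative: slopes down

# sanity check with a symmetric difference quotient along the curve
h = 1e-6
num_slope = (g(t0 + h) - g(t0 - h)) / (f(t0 + h) - f(t0 - h))
print(x0, y0, slope, num_slope)
```

The slope is about -0.0145, agreeing with the picture's slight downward tilt.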

I asked people what the curve x=sin(t)+2cos(t), y=2cos(t)-sin(t) looked like. We decided it was bounded because sine and cosine are always between -1 and +1, so the points of the curve must be in the box where both x and y are between -3 and +3. But what is the smallest box containing the curve? One way to answer this question is to just look at the graph and estimate the box, which gives, say, x approximately 2.2. Another way would be to try to use calculus. If dx/dt=0 then maybe we have a vertical tangent to the curve, which might give one side of the bounding box. So we looked for where dx/dt=0. This meant cos(t)-2sin(t)=0, or t=arctan(1/2), which is approximately .4636. When substituted into the equation for x, we get x approximately 2.24 (exactly sqrt(5)), just like the graph!
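The bounding-box value can be checked two ways in a little Python sketch (my addition, not from class): once with the calculus above, and once by brute-force sampling of the curve.

```python
import math

def x(t):
    return math.sin(t) + 2 * math.cos(t)

# calculus: dx/dt = cos(t) - 2 sin(t) = 0 when tan(t) = 1/2
t_star = math.atan(0.5)          # approximately .4636
x_max = x(t_star)                # sqrt(5), approximately 2.236

# brute force: sample the curve finely and take the largest x seen
n = 10**5
sampled_max = max(x(2 * math.pi * k / n) for k in range(n))
print(t_star, x_max, sampled_max)
```

Both approaches agree with the graph's estimate of about 2.2.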

I remarked that for "well-behaved" parametric curves, vertical tangents occur when dx/dt=0, and horizontal tangents when dy/dt=0.

Then I briefly discussed the arc length between (f(t),g(t)) and (f(t+delta t),g(t+delta t)). With the approximations f(t+delta t)=f(t)+f'(t)(delta t) and g(t+delta t)=g(t)+g'(t)(delta t) we saw that this distance (using the standard distance formula) was approximately sqrt(f'(t)^2+g'(t)^2) delta t. We can "add these up" and take a limit. The length of a parametric curve from t=START to t=END is INT_{t=START}^{t=END} sqrt(f'(t)^2+g'(t)^2) dt. The integrand (the function integrated) is frequently called the speed: sqrt(f'(t)^2+g'(t)^2).

We applied this to find the length of the ellipse whose bounding box we investigated. This turned out to be what's called an elliptic integral. Elliptic integrals first occurred when people wanted to find the length of ellipses. The length can't be written in terms of "elementary" functions. The integrals also happened to arise in certain physical computations. Therefore these functions were analyzed, and, before computers, tables were constructed. Sigh. Yet another function (actually, another bunch of functions since it turns out there are several kinds of elliptic integrals).
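Even though the elliptic integral has no elementary antiderivative, the length itself is easy to approximate numerically. Here is a Python sketch (mine, not shown in class) for the ellipse x=2cos(t), y=3sin(t), compared against Ramanujan's well-known approximation formula for the perimeter of an ellipse:

```python
import math

def speed(t):
    # x = 2 cos t, y = 3 sin t  =>  dx/dt = -2 sin t, dy/dt = 3 cos t
    return math.hypot(-2 * math.sin(t), 3 * math.cos(t))

# midpoint-rule approximation of the arc-length integral over [0, 2*Pi]
n = 200000
h = 2 * math.pi / n
perimeter = h * sum(speed((k + 0.5) * h) for k in range(n))

# Ramanujan's approximation for an ellipse with semi-axes a and b
a, b = 2, 3
ramanujan = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
print(perimeter, ramanujan)
```

Both come out near 15.87, so the "impossible" integral is perfectly computable, just not in elementary closed form.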

The instructor began considering polar coordinates.

Go to the old dead tree at dawn. Then at sunrise, walk fifteen paces in a north-north-east direction from the tree. Dig there for treasure.

We tried to understand this, and decided that it represented locating a point in the plane with reference to a fixed origin, "the pole", and a fixed direction, the polar axis. The origin was the dead tree and the fixed direction was sunrise. Thus what is specified is the distance from the tree (r=15) and the angle from the sun's direction, assumed east: north-north-east is Pi/2-Pi/8, so theta=3Pi/8. Of course in the purported text I quoted from it would be forgotten that the sunrise's direction changed with the season, or that there was no treasure but a booby prize, or that the steps were taken by a giant or a dwarf or ... many things.

I introduced polar coordinates and got the equations relating (x,y) to (r,theta). I looked at a point in rectangular coordinates and saw that there were infinitely many pairs of polar coordinates which described the position of this point (I gave examples). I remarked that in many applications and computer programs, there were restrictions on r (usually r is non-negative) or on theta (say, theta must be in the interval [0,2Pi) or in the interval (-Pi,Pi] or something) in an effort to restore uniqueness to the polar coordinate representations of a point's location. No such restrictions were suggested by the text used in this course.
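The conversion equations, and the non-uniqueness just mentioned, can be seen in a short Python sketch (my addition; the function names are mine). Note that `atan2` implements one of the conventions mentioned above, returning theta in (-Pi, Pi].

```python
import math

def polar_to_rect(r, theta):
    # x = r cos(theta), y = r sin(theta)
    return (r * math.cos(theta), r * math.sin(theta))

def rect_to_polar(x, y):
    # r = sqrt(x^2 + y^2); atan2 restricts theta to (-Pi, Pi]
    return (math.hypot(x, y), math.atan2(y, x))

# many polar pairs name the same point:
# (2, Pi/3), (2, Pi/3 + 2*Pi), and (-2, Pi/3 + Pi) all give the same (x, y)
p1 = polar_to_rect(2, math.pi / 3)
p2 = polar_to_rect(2, math.pi / 3 + 2 * math.pi)
p3 = polar_to_rect(-2, math.pi / 3 + math.pi)
print(p1, p2, p3)
```

The third pair uses a negative r, which the text permits but many programs forbid.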

Polar coordinates are useful for looking at things with circular symmetry. For example, the equation r=3 is much easier to contemplate than the equation x^2+y^2=9.

I gave equations relating polar coordinates to rectangular coordinates and vice versa. We will sketch some polar curves on Wednesday, when calculus in polar coordinates will also be skimmed. Sigh.

The instructor discussed one problem from workshop #5. Sigh.

Read the text on parametric curves and do problems, please. Read the text on polar coordinates and do problems, please. Also I will ask Mr. Scheinberg to answer questions about these four sections of the text.
Here is information about the final exam. Please think about when we should have a review session or I should have office hours. I hope students will do problems from the last two final exams I gave in Math 192 on Monday, the last class.

Thursday, December 1

Mr. Scheinberg answered questions about the last three sections in the series chapter.

We began a discussion of parametric curves.

I first studied the "unit circle" (x=cos t, y=sin t) and discussed how this compared with x^2+y^2=1. There is more "dynamic" information in the parametric curve description, but there is also more difficulty: two equations rather than one.

Students sketched (x=cos t,y=cos t) and (x=t^2,y=t^2) and (x=t^3,y=t^3): all these are pieces of the line y=x: I tried to describe the additional dynamic information in the parametric definitions.

Then I tried to sketch the parametric curves (x=2 cos t, y = 3 sin t) and (x=1+sin t, y= cos t -3) which I got from another textbook. The first turned out to be an ellipse whose equation is (x/2)^2+(y/3)^2=1 and the second turned out to be a circle of radius 1 centered at (1,-3). These geometric curves intersect, but do they actually describe the motion of particles which have collisions? Well, one intersection is a collision and the other is not (this is generally not too easy to see!).

This is a picture of the kinetic aspect of the situation just described. This image is a quarter of a megabyte and may take a while to load!

I tried to describe the parametric description of a cycloid. This is done with a slightly different explanation in section 10.1 of the text.

Read the text on parametric curves and do problems, please. Also I will ask Mr. Scheinberg to be prepared to answer questions about parametric curves and polar coordinates. If you prepare for this by looking at these sections, maybe we can devote the last class meeting to some review.

Wednesday, November 30

I began by discussing "solutions" to the first problem of the most recent workshop. The problem concerned two Fourier series. The first was SUM_{n=1}^{infinity} sin(nx)/2^n and the second was SUM_{n=1}^{infinity} sin(n!x)/n^2. The analysis of the convergence of both of these series is similar to that of problem 8 on the second exam. In both cases, the sine factor (forgetting its input!) is between -1 and +1, so the terms of both series in absolute value are dominated by the terms of a convergent series of constants (in one case, a geometric series with ratio 1/2<1, and in the other, a p-series with p=2>1). So since absolute convergence implies convergence, both of these series converge for all x's. Also both of these series are periodic with period 2Pi (see the answer to the exam problem, again). In the workshop problem, specific partial sums were requested which were within .01 of the whole sum, and then I asked for graphs of these partial sums. I displayed using technology (!) pictures of these graphs, which probably were correct. Versions of these pictures are below. The specific partial sums given are not the only correct answers to the question. "Higher" partial sums are also correct. By the way, several students did the problem very nicely.

SUM_{n=1}^{8} sin(nx)/2^n and SUM_{n=1}^{100} sin(n!x)/n^2.
I then asked "What is going on?" Why is one sum so nice and smooth, and the other, so jagged? I assured students that the pictures showed these qualitative aspects correctly.
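One way to see why those particular partial sums are within .01 of the full sums (my reconstruction in Python, not necessarily the argument used in class) is to bound the infinite tails: a geometric tail for the first series, and an integral-style bound 1/N for the p=2 tail of the second.

```python
# Tail of the geometrically dominated series: sum_{n>N} 1/2^n = 1/2^N.
# With N = 8 this is 1/256, comfortably below .01.
N1 = 8
geom_tail = 2.0 ** (-N1)

# Tail of the p-series-dominated series: sum_{n>N} 1/n^2 < 1/N.
# With N = 100 the bound is exactly .01.
N2 = 100
p_tail_bound = 1 / N2

# direct numerical check of the p-series tail, summed far out
p_tail = sum(1 / n**2 for n in range(N2 + 1, 200001))
print(geom_tail, p_tail_bound, p_tail)
```

So 8 terms suffice for the first series and 100 terms for the second, matching the partial sums displayed above.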

A long trip to explain the smooth picture
I then embarked on a (seeming!) digression to explain the first picture.

Ways of looking at the plane
As pairs of real numbers
As two-dimensional vectors
As complex numbers

Complex numbers
Discussion of complex number addition, multiplication, and even division.
On the way, definitions given about the real part and imaginary part of a complex number, its complex conjugate, and the modulus of a complex number (distance to origin). The last generalizes the absolute value of a real number.
Very quickly: convergence of a complex sequence as determined by the modulus of the difference between the sequence elements and the limit getting (and staying!) small.

Euler's formula
Insert iy into the Taylor series centered at 0 for the exponential function. Notice the behavior of powers of iy, etc. Collect and gather terms. Obtain:

Euler tells me ...
e^{it}=cos(t)+i sin(t) and cos(t)=[e^{it}+e^{-it}]/2 and sin(t)=[e^{it}-e^{-it}]/(2i)
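You can watch Euler's formula hold numerically with Python's complex arithmetic (a sketch of mine, not course material):

```python
import cmath
import math

# e^{it} = cos(t) + i sin(t) at several sample values of t
for t in (0.0, 1.0, math.pi / 2, 2.5):
    lhs = cmath.exp(1j * t)
    rhs = complex(math.cos(t), math.sin(t))
    assert abs(lhs - rhs) < 1e-12

# the companion formulas for cosine and sine
t = 0.7
cos_t = (cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2
sin_t = (cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j)
print(cos_t.real, sin_t.real)
```

The imaginary parts of `cos_t` and `sin_t` vanish (up to rounding), as the formulas predict.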

Things we can learn from this

  1. Well, sinh and cosh, discussed earlier in the course, are really almost the same algebraically as sine and cosine, if you believe in Euler's formula.
  2. If you replace sin(nx) by [e^{inx}-e^{-inx}]/(2i) in SUM_{n=1}^{infinity} sin(nx)/2^n and then sum geometric series, you will get as the sum, after some complex algebra, the function 2sin(x)/(5-4cos(x)). A graph was given of this (thank you, Mr. Brophy) which looked like the picture to the right.
    The picture looks a bit different from the original one by Maple because generally pictures produced by that program are stretched to fill a square (look at the axes!). This can be deceptive (I have been deceived upon occasion!).
  3. We found an antiderivative of e^x sin(Ax) "as a physicist would do it". Earlier this semester we had computed this antiderivative using integration by parts twice. Here, we used Euler's formula to write the function as a sum of two complex exponentials, easily antidifferentiated, and then "recovered" the real form of the answer using complex algebra.
There are other reasons you'll see to believe in Euler's formula. But I hope that what's above will begin to encourage you in this.
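The closed form from item 2 is easy to test against a partial sum (a Python sketch of mine; the function names are made up for this check):

```python
import math

def partial_sum(x, N):
    # partial sum of SUM_{n=1}^{infinity} sin(nx)/2^n
    return sum(math.sin(n * x) / 2**n for n in range(1, N + 1))

def closed_form(x):
    # the result of summing the complexified geometric series
    return 2 * math.sin(x) / (5 - 4 * math.cos(x))

# the tail after 60 terms is below 2^{-60}, so agreement should be essentially exact
for x in (0.3, 1.0, 2.0, -1.7):
    assert abs(partial_sum(x, 60) - closed_form(x)) < 1e-12
print("closed form agrees with the series")
```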

Engineers and sums
The derivative of SUM_{n=1}^{infinity} sin(nx)/2^n should be SUM_{n=1}^{infinity} n cos(nx)/2^n and that sum converges almost as fast (?) as the original sum. So the original sum should be differentiable (it wants to be) and this sum should be the derivative.
Now consider SUM_{n=1}^{infinity} sin(n!x)/n^2. Here the factors sin(n!x) conceal a whole lot of wiggling. If the derivative existed, then, well, it should want to be SUM_{n=1}^{infinity} n! cos(n!x)/n^2. But this series is way too divergent. I bet that this series doesn't converge for many x's and that the function represented by the original n! sum is not differentiable for many x's. That is indeed true. A verification is tedious and somewhat intricate and too difficult for me here. (By the way, it is differentiable at, say, some local max's and min's, with derivative=0.) The picture looks very jagged. In fact, lots of motions in "real life" look like this. See here for a general discussion of Brownian motion, which is probably better described by statistical ideas than by traditional calculus. Brownian motion is an important physical model, and was treated by Einstein in one of his epochal papers written in 1905. See Einstein 1905: The Standard of Greatness by John S. Rigden (less than $15 on Amazon).

A horrible function
I devoted the remainder of my time to studying the following function:
    f(x)=e^{-1/x^2} if x is not 0, and f(0)=0.
This turns out to be a very weird and very non-classical function. It would not have been "believed" by mathematicians before the 20th century. We "graphed" y=f(x) first by thought: no values of the exponential function can be negative (or even 0). But why should f(0) be defined as 0? Well, if x-->0 in the formula e^{-1/x^2} then the exponential function has as input a large negative number, which means its value will be a very small positive number. Therefore it is "natural" (if the defined function is to be continuous!) to define f(0) to be 0. If x is not 0, the function has values less than 1, because these are values of the exponential function at negative inputs. And when x gets large, the function gets close to 1, because exp's values for inputs which are small and negative are very close to 1. Here is a picture of y=f(x) with its horizontal asymptote. Note, please, the axes: x is between -10 and 10, and y is more or less between 0 and 1.
The most interesting aspect of this function is its behavior near 0 which doesn't show up much in the picture above. So here is a picture with x between -.1 and .1, again distorted to fill a square. This function's graph is flat near 0. In fact, most people who think about this function state that it is infinitely flat near 0.

Differentiability of f(x)
I tried to argue that f(x) was differentiable, not only away from 0, where the derivative is given by a simple formula, but at 0, where we needed to use the definition of derivative and evaluate the resulting limit with L'Hopital's rule. Indeed, it turns out that any derivative of f(x) is given by something like this:
For x not 0, a formula of the form e^{-1/x^2}·[polynomial in (1/x)]; for x=0, just 0.
I didn't verify this very well, but it is true.
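The derivative at 0 can at least be inspected numerically (a Python sketch of mine, not a proof): the difference quotient f(h)/h collapses to 0 very quickly, and in fact f(h) beats any power of h, which is the "infinite flatness" mentioned above.

```python
import math

def f(x):
    # the horrible function: e^{-1/x^2} away from 0, and 0 at 0
    return math.exp(-1 / x**2) if x != 0 else 0.0

# the difference quotient at 0, (f(h) - f(0))/h = f(h)/h, shrinks rapidly
for h in (0.5, 0.2, 0.1, 0.05):
    print(h, f(h) / h)

# even divided by a high power of h, f(h) still goes to 0: infinite flatness
print(f(0.1) / 0.1**10)
```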

So that ...
The Taylor series of f(x) centered at x=0 is the zero power series (all the coefficients are 0). f(x) is equal to its Taylor series only at x=0.

This would have made many classical mathematicians (and even physicists!) of the 19th century very upset. They wanted to believe that functions more or less were the same as their power series expansions whenever possible, because then everything would be computable, etc. Well, that belief is false. And it turns out that such functions as this f(x), which might have been spurned by all those people, are very useful in studying partial differential equations.

Word of the day: spurn

  1. reject with disdain; treat with contempt.
  2. repel or thrust back with one's foot.
(Why "with one's foot"? Weird.)

Please prepare problems from the last three sections on power series for Thursday.

Monday, November 28

I did a sequence of routine exercises using Taylor's Theorem.
  1. Describe how to get a rational number within 10^{-4} of e^2.
  2. Describe how to get a rational number within 10^{-8} of e^{1/2}.
  3. Describe how to find a polynomial within 10^{-4} of cos(x) on the interval [-2,2].
  4. How close does that polynomial get on the interval [-1/2,1/2]?
  5. If ln(1+x)=x-x^2/2+x^3/3+Error, find an overestimate of the error on the interval [-1/2,1/2]. (This was more challenging, and we had to think about a fraction and estimate it by taking the largest of what could be on the top divided by the smallest of what could be on the bottom.)
  6. Find the 10th Taylor polynomial and its error for (1+x)^5. The error turned out to be 0 and we seem to have gotten the Binomial Theorem. We discussed binomial coefficients for a while.
  7. Find the 4th Taylor polynomial for the function (1+x)^{1/5} and estimate its error when x is in the interval [-1/2,1/2].
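Exercise 1 can be carried out concretely (a Python sketch of mine, not a class solution): use the Lagrange remainder bound |R_n(2)| <= e^2·2^{n+1}/(n+1)! < 9·2^{n+1}/(n+1)! (since e^2 < 9) to pick the degree, then evaluate the Taylor polynomial with exact rational arithmetic.

```python
import math
from fractions import Fraction

# find the smallest degree n whose remainder bound is below 10^{-4}
n = 0
while 9 * 2**(n + 1) / math.factorial(n + 1) >= 1e-4:
    n += 1
print(n)  # degree needed

# the rational number: the degree-n Taylor polynomial of e^x evaluated at x = 2
approx = sum(Fraction(2**k, math.factorial(k)) for k in range(n + 1))
print(approx, float(approx))
```

Degree 11 turns out to be enough, and the resulting fraction really is within 10^{-4} of e^2.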
Then I discussed the generating function for Fibonacci numbers in a very abbreviated fashion. Please see this, which is, I hope, better.

Please prepare problems from the last three sections on power series for Thursday.

We reviewed for the exam. In particular, I attempted to show how comparison with geometric series would allow estimation of the sums of various series.

Sums of series
I used Taylor's theorem to show that the series mentioned above converge where appropriate with sums equal to the functions they represent. That is:
We know that f(x)=T_n(x)+R_n(x). When does R_n(x)-->0 as n-->infinity?

For any real number x, the remainders -->0 as n-->infinity in the series for e^x and sin(x) and cos(x).

If -1<x<1, the remainders -->0 as n-->infinity in the geometric series, the series for ln(1+x), and the series for arctan(x).

A power series centered at a is an infinite series, SUM_{n=0}^{infinity} c_n(x-a)^n. Most of the time (almost all of the time, actually) we will have a=0. In this sum, the n=0 term when x=a, which symbolically is c_0(a-a)^0 or c_0·0^0, will be understood to mean just c_0. So, for power series, 0^0 means 1.

Suppose we call S(x) the sum of the power series where the series converges: inside its interval of convergence. The series must converge absolutely at any point inside the interval of convergence which is not a boundary point (you can't be sure at boundary points). This is all proved by comparing the series to well-chosen geometric series, as I tried to indicate last time. Some other results follow in a similar way but the verifications, while not changing in nature, do get lengthier.

If S(x)=SUM_{n=0}^{infinity} c_n(x-a)^n and if r is the radius of convergence of the sum, then inside (a-r,a+r) the series can be differentiated and antidifferentiated term-by-term: S'(x)=SUM_{n=1}^{infinity} n c_n(x-a)^{n-1}, and an antiderivative of S(x) is SUM_{n=0}^{infinity} c_n(x-a)^{n+1}/(n+1)+C. Both of these new series have the same radius of convergence, r.

Let's see some examples of series manipulation which make use of these ideas. Please remember that these are examples made up to show how to use series. More "random" examples might not be as well-arranged for these techniques.

Example 0
If f(x)=SUM_{n=0}^{infinity} c_n(x-a)^n, then I found, using repeated differentiation and substitution, that c_n=f^{(n)}(a)/n! and therefore the power series must be the Taylor series of f(x) at x=a.

Example 1
Suppose f(x)=e^x. If e^x equals a power series centered at 0, it must be the series SUM_{n=0}^{infinity} x^n/n!.

Example 2
Suppose f(x)=sin(x). If sin(x) equals a power series centered at 0, then it must be the Taylor series. But d/dx has the "closed loop" sin(x)-->cos(x)-->-sin(x)-->-cos(x) and back to sin(x). When evaluated at 0, this gives 0, 1, 0, -1, ... repeating. The power series must be SUM_{n=0}^{infinity} (-1)^n x^{2n+1}/(2n+1)!. This at least deserves some explanation. The -1's to various powers cause the terms to alternate. The 2n+1's cause only the odd powers to appear; since sin(0)=0, the even terms are there but their coefficients are 0.

Example 2a
If f(x)=cos(x) is equal to a power series centered at 0, it must be the series SUM_{n=0}^{infinity} (-1)^n x^{2n}/(2n)!.

Example 3
If f(x)=1/(1+x), then the power series must be SUM_{n=0}^{infinity} (-1)^n x^n. This is because we can think of 1/(1+x) as the sum of a geometric series whose first term is 1 and whose ratio is -x. Since there is only one power series centered at a given point attached to a function, this series must be the power series centered at 0.

Example 3a
What is the power series centered at 0 for ln(1+x)? Well, if we differentiate ln(1+x) we get 1/(1+x), and that has power series SUM_{n=0}^{infinity} (-1)^n x^n. So antidifferentiate the series, and get SUM_{n=0}^{infinity} (-1)^n x^{n+1}/(n+1)+C. But ln(1+x) at x=0 is ln(1+0)=0, so C is 0. And ln(1+x)'s power series centered at 0 is the series written with C=0. Notice that this series has radius of convergence 1. The function it is related to, ln(1+x), has rather horrible misbehavior at x=-1 (it tends to -infinity), so maybe that explains the lack of convergence of the series past the radius of 1.

Example 3b
arctan(x) is the integral of 1/(1+t^2) from 0 to x. But (again, geometric series) 1/(1+t^2)=SUM_{n=0}^{infinity} (-1)^n (t^2)^n=SUM_{n=0}^{infinity} (-1)^n t^{2n}. Integrating and substituting, we get the series SUM_{n=0}^{infinity} (-1)^n x^{2n+1}/(2n+1) as the power series for arctan(x) centered at 0. The radius of convergence of this series is again 1, and now there is no clearly discernible problem with the graph of arctan(x) near +1 or -1. The lecturer tried to confuse the class with the idea that there is some problem with arctan near +i and -i, complex numbers. Huh.
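Both series from Examples 3a and 3b can be checked against the built-in functions (a Python sketch of mine; the function names are made up for this check):

```python
import math

def log1p_series(x, N):
    # partial sum of SUM_{n=0}^{infinity} (-1)^n x^{n+1}/(n+1), for -1 < x <= 1
    return sum((-1)**n * x**(n + 1) / (n + 1) for n in range(N + 1))

def arctan_series(x, N):
    # partial sum of SUM_{n=0}^{infinity} (-1)^n x^{2n+1}/(2n+1)
    return sum((-1)**n * x**(2 * n + 1) / (2 * n + 1) for n in range(N + 1))

x = 0.5  # well inside the radius of convergence, so convergence is fast
print(log1p_series(x, 60), math.log(1 + x))
print(arctan_series(x, 60), math.atan(x))
```

At x=0.5 both partial sums match `math.log(1.5)` and `math.atan(0.5)` to many digits; near the endpoints x=±1 the convergence would be far slower.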

Application #1
Suppose f(x)=x^3 e^{x^4}. What is the 17th derivative of f at 0?

Application #2
Write INT_{0}^{1} cos(x^3) dx as the sum of a series. How many terms are needed to get accuracy up to 10^{-10}?

Application #3
Discussion of payoff, of average winning, of average entrance fee to a game. First for a game with only 3 options, then for a game of this type: toss a fair coin repeatedly until the first tail; pay n dollars if the first tail is on the nth toss. Then what is the fair entrance fee (=average winning, =expectation, etc.)? This is essentially finding the value of SUM_{n=1}^{infinity} n/2^n. We found this sum by taking the sum of 1+x+x^2+x^3+... (a geometric series), then taking d/dx, then multiplying by x, then setting x=1/2.
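The differentiation trick gives SUM_{n=1}^{infinity} n x^n = x/(1-x)^2, which is 2 at x=1/2. A Python sketch (mine, not from class) confirms this three ways: by partial sums, by the closed form, and by simulating the coin game itself.

```python
import random

# closed form from the d/dx trick: x/(1-x)^2 at x = 1/2 gives 2
closed = 0.5 / (1 - 0.5)**2

# partial sums of SUM n/2^n converge to the same value
partial = sum(n / 2**n for n in range(1, 200))

# simulate the game: toss a fair coin until the first tail, pay n dollars
random.seed(0)  # fixed seed so the run is reproducible
def play():
    n = 1
    while random.random() < 0.5:  # "heads", keep tossing
        n += 1
    return n

trials = 200000
average = sum(play() for _ in range(trials)) / trials
print(closed, partial, average)
```

The simulated average payout hovers near $2, so $2 is the fair entrance fee.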

The word of the day.
carefree; unconcerned

We considered power series. We found various examples of boundary behavior by considering power series centered at 0 whose terms were:
x^n/n!        interval of convergence: all real numbers
n!x^n         interval of convergence: [0,0]
x^n           interval of convergence: (-1,1)
nx^n          interval of convergence: (-1,1)
(1/n^2)x^n    interval of convergence: [-1,1]
(1/n)x^n      interval of convergence: [-1,1)

I did some work estimating a power series inside its radius of convergence by geometric series. The purpose of this was to show students that the sum of a power series is always continuous inside its radius of convergence.

The Oxford English Dictionary is a massive project (you can't really call it a book any more) which is, essentially, a historical dictionary of the English language. Access to the OED is free through Rutgers. The OED declares that phlegm first appeared in 1387 in written English. It was then spelled fleem (really). The spelling wandered a great deal in the next two hundred years. The ph replaced f, and the h and g somehow ... arrived. The result is a word which is spelled very differently from its modern pronunciation, a very strange word. But written English is generally strange.

Please see this page about Taylor's Theorem.

Wednesday, November 9

Wachet auf, ruft uns die Stimme, BWV 645
"The first of the Schübler Chorales, so called because of the person who engraved them in copper, Georg Schübler. This is a transcription of the 4th movement of Cantata 140 for the 27th Sunday after Trinity (based on Matthew 25:1-13), which has the text "Zion hears the watchmen singing" (verse 2 in the lyric). Bach specifies: "Dextra 8 Fuss; Sinistra 8 Fuss; Pedal 16 Fuss." In other words, Right hand 8 foot (standard pitch), left hand 8 foot (standard pitch), Pedal 16 foot (an octave lower). Perhaps the reason for Bach's being so verbose is that these were actually published, a rarity in his lifetime."

Geometric series?
I asked how to identify a geometric series, and we discussed that for a while.

Does SUM_{n=1}^{infinity} n^2/3^n converge? I looked at the ratios of successive terms, and, following a suggestion of Mr. Townley, we looked at the infinite tail with n>=6. This allowed us, via a comparison test and then a sum of a geometric series, to decide that the series converged and even to get an overestimate of the sum. Indeed.
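In fact this sum has a closed form, SUM_{n>=1} n^2 x^n = x(1+x)/(1-x)^3, which gives 3/2 at x=1/3. A Python sketch (my addition) checks this and also the tail comparison: for n>=6 the ratio of successive terms is at most (7/6)^2/3, which is below 1/2, so the tail really is dominated by a geometric series.

```python
# closed form: SUM_{n>=1} n^2 x^n = x(1+x)/(1-x)^3; at x = 1/3 this is 3/2
x = 1 / 3
closed = x * (1 + x) / (1 - x)**3

# partial sums of the series itself
partial = sum(n**2 / 3**n for n in range(1, 100))

# ratio of successive terms: ((n+1)/n)^2 * (1/3); largest for n >= 6 is at n = 6
worst_ratio = (7 / 6)**2 / 3
print(closed, partial, worst_ratio)
```

Both `closed` and `partial` come out to 1.5, and the worst tail ratio is about 0.45, safely below 1.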

Statement of the Ratio Test
As in the text. For the p-series with p=2 (convergent!) and p=1/2 (divergent!) the Ratio Test returns the same result (the limit of the ratios is 1). So if the limit is 1, we can tell nothing!

We did a number of examples. I issued several warnings stating that it was easy to make algebraic mistakes.

One example: I asked if SUM_{n=1}^{infinity} 2^{2n}/[n!(n+1)!] converged. After many cautions about possible errors, we decided it did converge. By the way, here is some Maple opinion about this series:

> sum(2^(2*n)/(n!*(n+1)!),n=0..infinity);
                               1/2 BesselI(1, 4)
> evalf(%);
The first answer is to show that, indeed, everyone knows this is a Bessel function. Indeed. The next answer asks for an approximate value.

Here is a heuristic idea behind both the Ratio and Root Tests. Here's an appropriate definition for heuristic:

First, heuristic (adjective) means
      1. allowing or assisting to discover.
      2. [Computing] proceeding to a solution by trial and error.

If a_n is approximately a·r^n, that is, like a geometric series, then the quotient a_{n+1}/a_n is approximately (a·r^{n+1})/(a·r^n)=r, and the approximation should get better as n-->infinity. This gives a background for the Ratio Test. For the Root Test, if we assume that a_n is approximately a·r^n and take nth roots, then some algebra shows that (a_n)^{1/n} is approximately a^{1/n}·r, and we know that lim_{n-->infinity} a^{1/n}=1 if a>0, so we get the Root Test, similar to the Ratio Test but with (|a_n|)^{1/n} replacing |a_{n+1}/a_n|.

I did a sequence of wonderful problems, mostly from the book.

I looked at the power series SUM_{n=1}^{infinity} x^n/n^2. I asserted that, where this converged, it defined a function called the dilogarithm. Google gives about 43,600 pages for the dilogarithm. One page is written by an undergraduate at the University of Texas. This young woman asserts that the dilogarithm is "a cool function".

We used the Ratio Test to conclude that the dilogarithm series converged absolutely when |x|<1 but diverged when |x|>1. If x=1, the series converges because it is a p-series with p=2>1. If x=-1, the series converges by the Alternating Series Test. So the series converges exactly for x in the interval [-1,1], which is called the interval of convergence for this series.

If |x|<=1/2, I split the series (for no good reason) into two pieces:

SUM_{n=1}^{10} x^n/n^2 (a finite sum, just a polynomial!) + SUM_{n=11}^{infinity} x^n/n^2 (an infinite tail!)

I estimated the infinite tail:
|SUM_{n=11}^{infinity} x^n/n^2| <= SUM_{n=11}^{infinity} |x|^n/n^2
Notice that the absolute value signs came "inside". This is a consequence of the carpenter's ruler reasoning.
Then if |x|<=1/2 and if n>=11, |x|^n/n^2 <= (1/2)^n/121. This is slightly subtle reasoning, so you should think about it a bit and question me if you don't get it. Then the whole infinite tail can be estimated:
|SUM_{n=11}^{infinity} x^n/n^2| <= SUM_{n=11}^{infinity} |x|^n/n^2 <= (1/121)SUM_{n=11}^{infinity} (1/2)^n. The last sum is a geometric series whose first term is (1/2)^{11} and whose ratio is 1/2. Therefore the whole mess is equal to (1/121)(1/2)^{11}/[1-(1/2)]. Maple tells me that this is approximately .000008, rather small.
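The tail estimate is easy to double-check in Python (a sketch of mine, not part of the lecture): compute the geometric bound exactly and compare it with the actual tail at the worst point, x=1/2.

```python
# geometric bound on the tail for |x| <= 1/2:
# (1/121) * (1/2)^11 / (1 - 1/2), which is 1/123904, about .000008
bound = (1 / 121) * (1 / 2)**11 / (1 - 1 / 2)
print(bound)

# direct numerical tail of the dilogarithm series at x = 1/2
tail = sum((1 / 2)**n / n**2 for n in range(11, 300))
print(tail)
```

The true tail is even smaller than the bound, so the degree-10 polynomial really is within about .000008 of the dilogarithm on [-1/2,1/2].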

In fact, we can now "look" at the graph of the sum. Well, I will look at the graph of the (polynomial) partial sum. The picture included here is that graph in black. I also asked Maple to shift the graph up by .01 and draw it in green, and down by .01 and draw it in blue. Hey, the real graph differs from the curve in black by much less than .01, really at most .000008. So I believe that what I am seeing is essentially the graph of y=dilogarithm(x).

Tomorrow we will have fun with many problems from the textbook.

Monday, November 7

Two workshop problem presentations
Two significant problems were presented by Ms. Ho and Ms. Chow. These are problems 4 and 5 from the third workshop. Ms. Ho presented problem 4, which essentially asserted that
lim_{t-->0} INT_{1}^{2} 1/(x^2+t) dx = INT_{1}^{2} lim_{t-->0}[1/(x^2+t)] dx
That is, in this example, limit and integral can be interchanged.
Ms. Chow discussed problem 5, which showed that
lim_{A-->infinity} INT_{0}^{1} A^2(1-x)x^A dx = 1 but INT_{0}^{1} lim_{A-->infinity}[A^2(1-x)x^A] dx = 0.
In this example, limit and integral cannot be interchanged!
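Problem 5 can be checked directly (a Python sketch of mine, not the students' presentation): the integral has the exact value A^2/((A+1)(A+2)), which tends to 1, while the integrand at any fixed x tends to 0.

```python
# exact value of INT_0^1 A^2 (1-x) x^A dx, by antidifferentiating:
# A^2 * (1/(A+1) - 1/(A+2)) = A^2 / ((A+1)(A+2))
def exact(A):
    return A**2 / ((A + 1) * (A + 2))

for A in (10, 100, 10000):
    print(A, exact(A))  # creeps up toward 1

# but for every fixed x in [0,1), A^2 (1-x) x^A --> 0 as A --> infinity
x = 0.9
values = [A**2 * (1 - x) * x**A for A in (10, 100, 1000)]
print(values)  # collapses toward 0
```

So the integrals tend to 1 while the pointwise limit integrates to 0: the interchange genuinely fails here.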

These examples should convince you that life can be more complicated than first expected.

Example 1: harmonic series

Example 2: a geometric series

Logical confusion

Example 3: the alternating harmonic series

The Alternating Series test

Estimating the sum

The instructor shows a device with the help of "a student" (Mr. Townley)
The image shown is from a site where it is declared to be a "Royalty Free Photograph". I wanted students to know that
|SUM_{n=1}^{infinity} a_n| <= SUM_{n=1}^{infinity} |a_n|.

Some definitions
Absolute convergence.
Conditional convergence.

A theorem
Absolute convergence implies convergence.

Whose converse is not true
The alternating harmonic series.

An example

Exam warning
On Tuesday, November 22, which has the Wednesday class schedule.

Please read on in Chapter 11. Here are problems from the 152 syllabus which I'd like you to look at before Thursday.
Comparison tests 11.4: 3, 4, 5, 6, 9, 17, 18, 20, 23, 26, 33, 36
Alternating series, Absolute convergence 11.5: 3, 6, 7, 12, 17, 22, 28, 31
11.6: 2, 3, 5, 6, 9, 17, 18, 23, 29, 30, 32
Ratio and root tests 11.7: 1, 4, 5, 13, 16, 20, 21, 33

Diary entry in progress! More to come

Thursday, November 3

The instructor told students they needed to be able to write fluently and persuasively. Then Mr. Scheinberg with the help of an assistant answered questions about 11.1 and 11.2 and 11.3.

We expanded the world by asserting that, in addition to demons and humans and angels, there were archangels. So:

Some typical inhabitants: 37(ln(n))^4

In this hierarchy, every function of a family moving right eventually grows faster and is larger than every function of a family to its left.

Diary entry in progress! More to come

Wednesday, November 2

L'Hop and others ...
We discussed solutions of the problems on L'H and other limit tricks. I wanted to confirm some ideas with you. For example, at this stage in your educational careers, I wanted to be sure you understood what you were doing, and not to manipulate without understanding. Therefore I wanted explanations, however minimal, for each problem. For example, if you just apply L'H whenever you want, then ... well maybe you would apply it in #2, where you would definitely get the wrong answer. More subtle difficulties arise if you insert "infinity" in functions, and think that you would get a nicely computable result. An even more subtle logical difficulty occurs when you take a true result such as:
If lim_(x-->a) f(x) exists and lim_(x-->a) g(x) exists
then lim_(x-->a) (f(x)+g(x)) exists and equals the sum of the previous two limits.
The converse to this statement is not generally true. Use of it in solving problem 10 can lead to difficulties. I suggested that we change 10 slightly, and consider lim_(x-->+infinity) [x^3/(x^2-x)]-[x^3/(x^2+x)] where the result will be different. You need to be careful!

Problem presentations
were made by Ms. Waters and Mr. LaBouff. I tried to advertise that their problems, although perhaps not as seemingly intricate as others, were important in applications.

Comparison Theorem
I stated the text's major comparison result and tried a few problems from 11.4. I stated that the major comparison ingredients were the p-series and geometric series. Also convergence propagates from bigger series to smaller series, while divergence propagates the other way. I tried to quantify the problems: if a series converges, how many terms are needed to approximate the sum within a certain tolerance? If a series diverges, and the series has positive terms, how can we make a partial sum bigger than some assigned number? The techniques are again to compare with a geometric series or an improper integral, and force the infinite tail of the series to be small or a partial sum to be large.

More tomorrow!

Monday, October 31

Repeating ...
The instructor wrote the definitions of series, partial sum, and convergence again.

A convergence fact
If a series converges, then its nth term must approach 0 as n-->infinity. This is because the nth term is the difference of the nth and (n-1)st partial sums (please see the text). Notice that the converse is false (the harmonic series shows this). Also note that the contrapositive is true (and logically equivalent to the original statement): suppose a_n is the nth term of a series. If lim_(n-->infinity) a_n does not exist, or if it exists and is not 0, then the series does not converge.

Logic and logical words
Statements of the form, "IF P THEN Q" are called implications. P is frequently called the hypotheses or antecedent, and Q is called the conclusion or consequent. Other related expressions are:
IF Q THEN P. This is called the converse.
IF (not Q) THEN (not P). This is called the contrapositive.
IF (not P) THEN (not Q). This is called the inverse.

The truth value of an implication is always the same as its contrapositive. The truth value of the converse and the inverse are always the same. The truth value of an implication and its converse may be different.
The logical universe here is the real numbers.

The logic of series sometimes is rather "twisty" and it may sometimes be useful to consider the preceding examples!

General geometric series
I found a formula for a+ar+ar^2+...+ar^K, the Kth partial sum. I think it was (a-ar^(K+1))/(1-r). Therefore if K-->infinity and |r|<1, then r^(K+1)-->0 and the partial sums converge. The sum is a/(1-r) if the first term is a.
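A quick Python sketch (names mine) comparing the direct Kth partial sum with the formula, and with a/(1-r):

```python
def geometric_partial(a, r, K):
    """Direct sum a + a r + ... + a r^K."""
    return sum(a * r ** n for n in range(K + 1))

def geometric_formula(a, r, K):
    """Closed form (a - a r^(K+1)) / (1 - r)."""
    return (a - a * r ** (K + 1)) / (1 - r)

# With |r| < 1 the partial sums approach a / (1 - r).
a, r = 5.0, 0.5
print(geometric_partial(a, r, 20), geometric_formula(a, r, 20), a / (1 - r))
```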

Integral test
If f(x) is a positive decreasing function, then the series SUM_(n=1)^(infinity) f(n) and the improper integral INT_(anywhere)^(infinity) f(x) dx (the lower limit can be any convenient starting point) either both converge or both diverge.

Also, the verification of this statement provides a way of getting (imprecise!) numerical estimates of tails and partial sums.

  1. Example of convergence:
  2. Example of divergence:

General p-series: SUMn=1infinity1/np
Convergence for p>1, divergence otherwise.

Numerical examples

  1. Find a partial sum of SUMn=1infinity1/n4 which is within 10-20 of the sum of the whole series
  2. Find a partial sum of SUMn=1infinity1/n1/4 which is greater than 100.
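Both of these can be attacked with integral-test bounds. Here is a Python sketch (the bounds 1/(3K^3) and (4/3)((K+1)^(3/4)-1) come from integrating x^(-4) and x^(-1/4); note that 10^(-20) is far below double precision, so the first function only counts the terms needed rather than actually summing that accurately):

```python
import math

# Tail bound for SUM 1/n^4: the tail past K is below INT_K^infinity x^(-4) dx
# = 1/(3 K^3), so it is enough to pick K with 1/(3 K^3) < tolerance.
def terms_needed_p4(tol):
    return math.ceil((1 / (3 * tol)) ** (1 / 3))

# Lower bound for SUM 1/n^(1/4): the partial sum through K is at least
# INT_1^(K+1) x^(-1/4) dx = (4/3)((K+1)^(3/4) - 1).
def partial_sum_quarter(K):
    return sum(n ** -0.25 for n in range(1, K + 1))

print(terms_needed_p4(1e-20))    # a few million terms suffice for 10^(-20)
print(partial_sum_quarter(400))  # already past 100
```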
Comparison techniques
We considered SUM_(n=1)^(infinity) 1/[3^n+sqrt(n)]. Does this series converge? I know it is less than SUM_(n=1)^(infinity) 1/sqrt(n) but this is NOT HELPFUL. We need to get the series termwise less than a convergent series in order to know that it converges, or we need to get it termwise greater than a divergent series in order to know that it diverges. So we lost information by comparing with a series whose terms are 1/sqrt(n). But we could compare it with a series whose terms are 1/3^n. This is a geometric series with ratio 1/3, which is certainly less than 1. So since our series is termwise less than a convergent series, it must also converge.

I also asked, what K should one take to be sure that the Kth partial sum, SUM_(n=1)^K 1/[3^n+sqrt(n)], is within, say, 10^(-5) of the whole sum? Well, the tail, what we're leaving out, is just SUM_(n=K+1)^(infinity) 1/[3^n+sqrt(n)]. This is termwise less than the series SUM_(n=K+1)^(infinity) 1/3^n, a convergent geometric series whose sum is (a=1/3^(K+1) and r=1/3 so 1-r=2/3) a/(1-r)=(1/2)3^(-K). A little experimentation shows that this occurs when K is at least 10. Therefore I bet

> add(1/(3^n+sqrt(n)),n=1..10);
856507/3267876 + 1/(9+2^(1/2)) + 1/(27+3^(1/2)) + 1/(243+5^(1/2))
    + 1/(729+6^(1/2)) + 1/(2187+7^(1/2)) + 1/(6561+2*2^(1/2)) + 1/(59049+10^(1/2))

> evalf(%);
that, to five decimal places, the sum is .39899.
The 20th partial sum is actually 0.3990052397, so maybe we are correct. (So it isn't accurate to five decimal places, but it is accurate to 10-5!)
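The same computation can be repeated in Python (a sketch, with my own function name) to check that the 10th partial sum really is within 10^(-5) of the 20th:

```python
import math

def partial(K):
    """Kth partial sum of SUM 1/(3^n + sqrt(n))."""
    return sum(1 / (3 ** n + math.sqrt(n)) for n in range(1, K + 1))

s10, s20 = partial(10), partial(20)
# The geometric tail bound (1/2) * 3^(-10) is about 8.5e-6, so the 10th
# partial sum should already be within 10^(-5) of the full sum.
print(s10, s20, abs(s20 - s10))
```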

Diary entry in progress! More to come

Please read on in Chapter 11. Here are pieces of the 152 syllabus which I'd like you to look at, and I hope that Mr. Scheinberg will be able to answer questions about them on Thursday.

11.1: 2, 5, 6, 12, 13, 15, 18, 21, 26, 32, 34, 45, 46, 61,64
11.2: 11, 14, 17, 18, 21, 22, 27, 38, 41, 44, 49
Integral tests, estimates 11.3: 3, 7, 9, 13, 16, 21, 28, 31, 34

Rates of growth, in heaven, on earth, and below
Several students wrote to me more or less asking about rates of growth of functions. Here is an excerpt from my response, somewhat quirky, of course, but also serious.

I think intuition is extremely useful. It is also something that can be informed and improved. Most people who study the behavior of functions and how they grow probably have intuition sort of like this (and I'll begin with a silly metaphor first):

The world is made up of a hierarchy (spelling?) of demons and humans and angels. All the demons are less than the humans and all the humans are less than the angels. The "internal" arrangements of {demon|human|angel} society are quite complex, but between the societies things are rather simple.

Now onto functions and growth of functions, if you can stop giggling. Let's think about polynomials: x^2 and .002x^3 and -sqrt(5)x^9+98x^10. Polynomials are nice and I think maybe I can almost understand them. They are all a sum of monomials multiplied by constants. As x-->+infinity, what matters is the highest degree term with a positive coefficient, and what matters if two polynomials have a highest degree term of the same degree is what the coefficient is.

Now polynomials are human. What are angels? An angel is a sum of constants multiplying exponentials with constants. So angels are 2e^{3x} and 5e^{.0003x}+99e^{-99x}. These functions also have rates of growth as x-->+infinity, and we can compare two of them in a similar fashion, only here the comparison is first look for a positive coefficient multiplying an exponential with a positive growth number. So .99e^{.03x} is eventually bigger than 9999999999e^{.0003x}.

How are polynomials and exponentials related? Let me stick to things with positive coefficients. A very tiny exponential, say .00001e^{.00000000001x}, is eventually bigger than a huge polynomial, say 10,000,000,000x^{100,000,000,000}, as x-->infinity. Eventually, all angels outrank humans.

Now, continuing in our development of function growth via analogy and idiotic metaphor, let's consider polynomials of log functions: these are functions like 33(ln(x))^{30} and sums of them. Well, these are the demons. EVERY demon is eventually less than EVERY human.

Let me "compare" P(x)=33(ln(x))^{300} with, say, Q(x)=x^{.0001}. Poor Q(x) is a very weakly growing human, as x-->+infinity. And, wow, P(x) is rather a strong demon. Indeed, P(10) is about 1.5 times 10 to the 110th power, and Q(10) is about 1.00023. But let me investigate their "ultimate strength". The simplest way is to consider the limit of P(x)/Q(x) as x-->+infinity. Certainly this is a limit of the form infinity/infinity, so I should L'Hop the whole mess. If I do, the result seems to be:

[33·300·(ln(x))^{299}·(1/x)] / [.0001·x^{.0001-1}]

I hope I did this correctly. Now let's do some algebra to this quotient. I will put all of the x powers downstairs, and push the constants out to the front. So the result is (I hope):

(ugly constant) 33(300)/.0001 multiplying (ln(x))^{299} divided by x^{.0001}.

Essentially all I have done is lowered the degree of the demon by 1, and I still want the limit as x-->infinity. I hope you can convince yourself that eventually (after another 299 L'Hops?) the limit will be 0. This "miserable" human (?) eventually defeats a very powerful demon. It may take a while but this really really happens. For example, I bet (the numbers are absurd, but you can check with logarithms) that once x is bigger than about 10^{23,000,000}, P(x) is LESS THAN Q(x). If you object that such an x is large, my response will be that there is just as much "room" between it and infinity as there is between, say, 17 and infinity. And scale of action doesn't matter to demons and humans and angels, only what EVENTUALLY happens.
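The "ultimate strength" contest can be checked numerically, provided we work with logarithms, since the numbers themselves overflow ordinary floating point. A Python sketch (names mine), with t standing for ln(x):

```python
import math

# Compare P(x) = 33 (ln x)^300 and Q(x) = x^0.0001 via their natural logs.
def log_P(t):  # t = ln(x)
    return math.log(33) + 300 * math.log(t)

def log_Q(t):
    return 0.0001 * t

# Near x = 10 (t about 2.3) the "demon" P is astronomically bigger ...
print(log_P(math.log(10)) > log_Q(math.log(10)))
# ... but once ln(x) is around 10^8 (x about 10^43,000,000),
# the "human" Q has won.
print(log_Q(1e8) > log_P(1e8))
```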

Sigh. I hope this does help you. It may be more than you want to know but it really is more or less the mathematical truth. The metaphor is just there to help. People who study theoretical computer science and algorithms really worry about the growth of functions. They have the families of functions we have just discussed (called Exp and Poly and Log) but also many others. Sigh. You can look at their zoo if you like, to check that I'm not kidding: the complexity zoo.

Thursday, October 27

Solving a differential equation
Suppose we consider the initial value problem:

Yes, we already know all about this, but frequently the more ways you know to analyze even simple situations, the better off you will be. So here I would like to compute the solution. If you really think about the word compute, well, the "ground floor" of computation is arithmetic. And the only functions I can compute with some assurance are, maybe, polynomials. So suppose that I assume this initial value problem has a polynomial solution:
Let us try to translate the information the differential equation and the initial condition give. So:
dy/dx=y means d/dx(A+Bx+Cx^2+Dx^3+Ex^4+....)=A+Bx+Cx^2+Dx^3+Ex^4+....
so if we "line up" the coefficients on both sides, we seem to get:     B=A from the constant terms
    2C=B from the x terms
    3D=C from the x^2 terms
    4E=D from the x^3 terms
    Etc. forever.
The y(0)=1 means that A=1.

Now feed A=1 through the previous collection of equations. We are forced to assign these values:
A=1     B=1     C=1/2     D=1/6     E=1/24    
Etc. Indeed, if you are a bit attentive, you can see that the coefficient of xn will be 1/n! (that's n factorial, again, the product of the integers from 1 up to n). I will "understand" that 0! will be 1, a special case of the notation.

(Uncomfortable!) Conclusion
The solution of the initial value problem

is the infinite degree polynomial 1+x+(1/2)x^2+(1/6)x^3+(1/24)x^4+...+(1/n!)x^n+...
but I have a hard time understanding how to compute such a thing. I certainly can't add up an infinite number of terms, because that is impossible (let's see, at one sum a second, adding up an infinite number of terms would take ... eternity ... and although I like to think positively, that is a bit daunting!).
So we will need to analyze this situation carefully. And we will.
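Here is a small Python check (names mine) that the partial sums of this "infinite degree polynomial" really do settle down, and settle down to e^x:

```python
import math

def exp_partial(x, N):
    """Partial sum 1 + x + x^2/2! + ... + x^N/N! of the series solution."""
    total, term = 0.0, 1.0
    for n in range(N + 1):
        total += term
        term *= x / (n + 1)  # go from x^n/n! to x^(n+1)/(n+1)!
    return total

# The partial sums approach e^x remarkably quickly.
for N in (5, 10, 20):
    print(N, exp_partial(1.0, N), math.exp(1.0))
```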

Wonderful Mr. Scheinberg answered questions.

Definition time
An infinite series is a "formal" infinite sum. So, for example,

Diary entry in progress! More to come

Series. Partial sum. Convergence and sum of a series.

If a series has non-negative terms, then ...

Diary entry in progress! More to come


Diary entry in progress! More to come

Please begin reading chapter 11. What we are doing is in 11.1 and 11.2. I will cover at least 11.3 and 11.4 this week. Do the assigned homework because this material is difficult and needs practice.

Wednesday, October 26

Review of what we did before in a hurry

Another sandwich:

Limits depend only on tails

  • Silly example:
        xn=7Pi for 1<=n<=10,000
        xn=0 for 10,001<=n<=20,000
        xn=1+(1/n) for 20,001<=n
    Does this sequence converge? If it does, what is its limit?

    It certainly does, and its limit is 1.

  • Maybe a less silly example:
    Look at the beginning of this sequence which is defined by quite a simple formula (computations done by Maple):
    1.367879478, 0.6353353374, 0.3831204465, 0.2683156682, 0.2067379638, 
    0.1691454278, 0.1437690293, 0.1253354648, 0.1112345219, 0.1000454004
    It surely looks "good", controlled, maybe convergent, etc. Here is the 100th term:
    and the 1,000th term:
    and now the 1,000,000th term:
    0.1000000000 10^(-5)
    This is a sort of tiny number. But here is the 100,000,000th term:
    0.5163291506 10
    which is quite a large number.
    What is going on? This is the formula I had Maple use:
    x_n=(1/n)+e^(-n+.0000001·n^2). The ".0000001" in the exponent makes it clear that the "effect" of the n^2 won't be "felt" for "a while". But an n^2 in the exponent will dominate surely for large enough n, and the darn sequence will definitely explode: it will not converge.
    This tail gets too big.
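A Python sketch of the same point (my function name): rather than computing the terms themselves, which eventually overflow, look at the sign and size of the exponent -n+.0000001·n^2:

```python
# x_n = 1/n + e^(-n + .0000001 n^2); the exponent tells the whole story,
# because e^(big positive number) overflows an ordinary float.
def exponent(n):
    return -n + 1e-7 * n * n

for n in (100, 10**6, 10**8):
    print(n, exponent(n))
# The exponent is negative until n is around 10^7, so early terms look
# like 1/n; after that the exponential part explodes and the sequence
# cannot converge.
```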

    Interlude: geometry, axioms, logic, truth, beauty ...

    Diary entry in progress! More to come

    Calculators, graphs, etc., with rational numbers only
    The Intermediate Value Theorem fails: our eyes deceive us!

    Diary entry in progress! More to come

    The additional assumption

    Diary entry in progress! More to come

    An iteration
    Consider the function f(x)=sqrt(4x+73). This is a fairly simple function. We will use it to define a sequence recursively: x_1=1 and x_(n+1)=f(x_n)=sqrt(4x_n+73).
    Here are the first ten terms of the sequence:

    1.000000000, 8.774964387, 10.39710814, 10.70459867, 10.76189550, 
    10.77253833, 10.77451406, 10.77488080, 10.77494887, 10.77496151
    This sure looks (and smells?) like a convergent sequence. But you can't really tell. Only the infinite tail matters, and I can't directly "access" the infinite tail.
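The iteration is easy to reproduce in Python (a sketch, assuming the starting value x_1=1 that produced the list above); if there is a limit L, it should satisfy L=sqrt(4L+73), that is, L^2-4L-73=0, whose positive root is 2+sqrt(77):

```python
import math

def iterate(n_steps):
    """Run x_(n+1) = sqrt(4 x_n + 73) starting from x_1 = 1."""
    x = 1.0
    seq = [x]
    for _ in range(n_steps):
        x = math.sqrt(4 * x + 73)
        seq.append(x)
    return seq

seq = iterate(30)
# Compare the tail of the sequence with the positive fixed point 2 + sqrt(77).
print(seq[:5], seq[-1], 2 + math.sqrt(77))
```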
    Analysis of this sequence

    The harmonic numbers revisited
    The numbers, {Hn}, were previously defined. Here is another way to analyze them, by comparing them to areas under the curve y=1/x.

    Diary entry in progress! More to come

    Dichotomy of monotone sequences
    Our word, dichotomy, has as its first definition, a division into two, esp[ecially]. a sharply defined one.
    That's certainly what I want here. Remember that a monotone sequence is a sequence which is either increasing or decreasing. So the sequence {(-1)n} which alternates between +1 and -1 is not monotone.

        Either a monotone sequence is bounded and converges
        or it is unbounded and diverges.

    Diary entry in progress! More to come

    Monday, October 24

    Definition of sequence
    A sequence is a function whose domain is the positive integers (that's 1 and 2 and 3 and 4 and ...). It will sometimes be convenient to have sequences starting at 0, so their domain will be the non-negative integers (0 and 1 and 2 and 3 and 4 and 5 and ...).

    Example (trap rule approximations)
    Suppose I was (unfortunately) interested in computing something like INT_0^2 cos(x^7)/(1+x^4) dx. No one knows an antiderivative of this function in terms of familiar functions. I could define a sequence in the following way:
    T_n is the result of using the trapezoid rule approximation for the function cos(x^7)/(1+x^4) on the interval [0,2], when the interval is divided into n equal parts.
    I certainly wouldn't want to compute this, but I (or my silicon pals) could do it, if necessary. In addition, due to the extravagant work we did earlier this semester, I know that if I had to, I could find a number Q so that (if the letter I represents the true value of the integral), then |T_n-I|<Q/n^2. It wouldn't be fun to compute a value of Q but we could do it.
    I bet that the sequence {Tn} converges and that the limit of this sequence is the value of I.
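My silicon pals can actually compute some of the T_n. A Python sketch (function names mine; only modest n, nothing heroic):

```python
import math

def f(x):
    return math.cos(x ** 7) / (1 + x ** 4)

def trap(n):
    """Trapezoid rule for f on [0, 2] with n equal pieces."""
    h = 2 / n
    total = (f(0) + f(2)) / 2
    for k in range(1, n):
        total += f(k * h)
    return total * h

# Successive approximations settle down, suggesting {T_n} converges to I.
for n in (100, 1000, 10000):
    print(n, trap(n))
```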

    Example (decimal approximations to sqrt(2))
    1^2 is less than 2 and 2^2 is greater than 2. So I'll start with 1.
    1.4^2 is less than 2 and 1.5^2 is greater than 2. So my next term will be 1.4.
    1.41^2 is less than 2 and 1.42^2 is greater than 2. So my next term will be 1.41.
    1.414^2 is less than 2 and 1.415^2 is greater than 2. So my next term will be 1.414.
    1.4142^2 is less than 2 and 1.4143^2 is greater than 2. So my next term will be 1.4142.
    Etc. (Etc. here means, uhhh, it is probably difficult to describe the whole process exactly.) But I bet if q_n is the nth term in this sequence, |q_n-sqrt(2)|<10^(-n+1). (I think I wrote 10^(-n) in class but isn't that wrong?)
    I bet that the sequence {qn} converges and that the limit of this sequence is sqrt(2).

    A sequence converges (roughly) if there is a number that it gets close to and stays close to. That number is called the limit of the sequence. A precise definition is near the bottom of p.703 of the text.

    Sequences don't have to converge
    Just look at {(-1)n} which flips back and forth from +1 to -1. It does not get close and stay close to any one number, so this sequence does not converge.

    Facts about limits of sequences
    There are a bunch of facts about limits and sequences which you should know. All of these facts can be proved, and none require any advanced techniques besides a great deal of concentration and patience. We usually verify these facts in Math 311 which some of you may want to take. But right now, I'd just like you to agree that they are probably true.

    Algebraic facts

  • Suppose that {an} converges and its limit is L. Then if c is a constant, the sequence {c·an} converges and its limit is cL.
  • Suppose that {an} converges and its limit is L and that {bn} converges and its limit is M. Then the sequence {an+bn} converges and its limit is L+M.
  • Suppose that {an} converges and its limit is L and that {bn} converges and its limit is M. Then the sequence {an·bn} converges and its limit is L·M.
  • Suppose that {an} converges and its limit is L and that {bn} converges and its limit is M. Suppose also that M is not 0. Then for n large enough, bn is not 0, and the sequence {an/bn} converges and its limit is L/M.
    So the algebraic "stuff" works the way it should.

    Order facts

  • Suppose that {an} converges and its limit is L and that {bn} converges and its limit is M. If we know that for large enough n, an<=bn, then L<=M.
  • Suppose that {an} converges and its limit is L and that {bn} converges and its limit is M. If we know that L<M, then for large enough n, an<bn.
    Comment This is a little bit "ticklish". It is not exactly a converse of the previous statement, because the precise converse would not be true. The sequence {1/n} converges to 0 and the sequence {-1/n} converges to 0, but although 0<=0, it is not true that 1/n is <= -1/n.
  • Sandwich or squeeze result If {an} and {bn}and {cn} are three sequences, and I know that an<=bn<=cn for all n, and I know that {an} and {cn} both converge to the same limit, then the middle sequence, {bn} also converges to that limit.

    Warning! Subtlety coming!!!
    There's one more limit fact that I will recite next time which is somewhat more subtle than all of these.

    Sequences can be defined by formulas
    This is probably the most familiar way.

    Suppose r is between 0 and 1. Then {r^n} converges, and the limit of this sequence is 0.
    Why? Well, r^n=e^(n ln(r)). Now ln(r) is a negative number since r is between 0 and 1. Multiplying this negative number by n as n-->infinity makes it more negative. In fact, n ln(r)-->-infinity. But then e^(n ln(r))-->"e^(-infinity)" and that's 0 (if you have a picture of y=e^x in your head!).

    Suppose a is a positive number. Then the sequence {a^(1/n)} converges, and its limit is always 1.
    Why? Well, a^(1/n)=e^((1/n)ln(a)) and I know that (1/n)ln(a)-->0 as n-->infinity. So a^(1/n)-->e^0=1 as n-->infinity.
    Comment I really don't think this is totally obvious. Here are the 1/nth powers of 2 as n goes from 1 to 10:

    2.000000000, 1.414213562, 1.259921050, 1.189207115, 1.148698355, 
    1.122462048, 1.104089514, 1.090507733, 1.080059739, 1.071773463
    These numbers go down to 1.
    I really don't think this is totally obvious. Here are the 1/nth powers of 1/3 as n goes from 1 to 10:
    0.3333333333, 0.5773502692, 0.6933612743, 0.7598356856, 0.8027415617,
    0.8326831776, 0.8547513999, 0.8716855429, 0.8850881521, 0.8959584598
    These numbers go up to 1.

    Huh: now consider {n^(1/n)}. The n's in the base grow and the 1/nth powers push things down. I don't think the "winner" is clear, even though most students thought it was. Here are the first 20 terms:

    1.000000000, 1.414213562, 1.442249570, 1.414213562, 1.379729661, 
    1.348006155, 1.320469248, 1.296839555, 1.276518007, 1.258925412, 
    1.243575228, 1.230075506, 1.218114044, 1.207442027, 1.197860058, 
    1.189207115, 1.181352075, 1.174187253, 1.167623484, 1.161586350
    and maybe even now this isn't totally "clear". But, in fact, {n1/n} does converge, and its limit is 1.
    Why? Well, n^(1/n)=e^(ln(n)/n), and from our extended holiday with l'H we know that:
    ln(n)/n, as n-->infinity, is of the form infinity/infinity, so that it is eligible for l'H. Thus we must study, instead of ln(n)/n, the quotient (1/n)/1, and this limit is surely 0. Therefore ln(n)/n-->0 and e^(ln(n)/n)-->e^0=1. Indeed.
    By the way, 10001/1000 is approximately 1.006931669.

    I asked if the sequence {(3^n+7^n)^(1/n)} converged. Here's some numerical evidence, the first 10 terms:

    10.00000000, 7.615773106, 7.179054352, 7.058305379, 7.020125508, 
    7.007210537, 7.002652582, 7.000995354, 7.000379289, 7.000146315
    So I guess that this sequence converges and the limit is 7. Well, that's true. Why? I will use the Squeeze result carefully. Certainly
    7^n <= 3^n+7^n <= 7^n+7^n = 2·7^n.
    That means
    7 <= (3^n+7^n)^(1/n) <= 2^(1/n)·7.
    But 2^(1/n)-->1 as n-->infinity, so that (3^n+7^n)^(1/n) is sandwiched between two sequences which have limit=7, and therefore it must also converge and have limit=7.
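A Python check (name mine) of the sandwich: every term sits between 7 and 7·2^(1/n), and both ends head to 7.

```python
def a(n):
    """The nth term (3^n + 7^n)^(1/n)."""
    return (3 ** n + 7 ** n) ** (1 / n)

# Each term is squeezed between 7 and 7 * 2^(1/n).
for n in (1, 2, 10, 50):
    print(n, a(n), 7 * 2 ** (1 / n))
```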

    Sequences can be defined recursively
    Many sequences of great importance in science and engineering are defined recursively. This means that terms of the sequence are defined in terms of previous terms of the sequence. Let me discuss some examples.

    Sqrt(2) by Newton's method
    Newton's method is a way of replacing a guess at a solution of f(x)=0 by an improved guess, obtained by "sliding down" a tangent line. It is discussed in section 4.9 of the textbook. Here let me show a way to approximate sqrt(2). I'll take f(x)=x^2-2. Then sqrt(2) is the positive root of this function. The iteration or recursion step describes how to go from a guess, x_n, to a new guess, x_(n+1). Let me tell you what the coordinates of the points in the picture to the right are. First, A is the point (sqrt(2),0). C has coordinates (x_n,0) (the old guess, which is to be improved). D is the point (x_n,x_n^2-2) on the curve y=f(x). The line tangent to the curve will have slope f'(x_n)=2x_n. Therefore, since the slope represents the tangent of the angle CBD, it must be the ratio of the geometric lengths DC/BC. But DC is x_n^2-2, and so 2x_n=(x_n^2-2)/BC, and the length of BC is (x_n^2-2)/(2x_n). The coordinates of the point B are supposed to be (x_(n+1),0), and x_(n+1) is the improved guess. So x_(n+1) will be x_n with the length of BC subtracted: the guess moves backwards. Therefore
    The reason I'm going through this is that this specific formula simplifies in a remarkable way.

    x_n - (x_n^2-2)/(2x_n) = (2x_n^2-(x_n^2-2))/(2x_n) = (x_n^2+2)/(2x_n) = (1/2)(x_n + 2/x_n)
    So we replace x_n by the average of x_n and 2/x_n.
    I'll define a sequence by
    Uhhhh .... Maple calculated the first few elements of this sequence:
    You should understand that the values quoted are exact values. A decimal approximation of the last number is 1.414213562, which happens to agree with Maple's 10 digit approximation to sqrt(2).
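The averaging recursion in Python (a sketch; six steps from x_1=1 are already enough for machine precision):

```python
import math

# Babylonian/Newton averaging: x_(n+1) = (1/2)(x_n + 2/x_n), started at x_1 = 1.
x = 1.0
for _ in range(6):
    x = (x + 2 / x) / 2
print(x, math.sqrt(2))
```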

    There is evidence that this averaging idea was known to "ancient civilizations" (Egypt, India) and was used to improve approximations to square roots. But I don't think that a non-recursive (formula!) method for defining the sequence is known.

    Look at the recursion:
    If x_n--> a limit, L, then surely x_(n+1)--> the same limit, L, because {x_(n+1)} is just the same sequence, moved one step further on. But then the recursion x_(n+1)=(1/2)(x_n+2/x_n) becomes (as n-->infinity) L=(1/2)(L+2/L), and this is 2L=L+2/L, which is L=2/L, which is L^2=2, which is L=+/-sqrt(2). In fact, the limit should be +sqrt(2) since we started at x_1=1>0, and the averaging process keeps all terms positive.

    Therefore ...
    Have we proved that the sequence converges to sqrt(2)? Actually, no. We showed (the logic is important!) that if the sequence converges, then its limit is sqrt(2). Theory and practical application are full of examples where the "if" part is missing or untrue. Students should recognize this.

    Another recursive sequence
    We could define the sequence by
    Well, since x1=1, x2=(2)(1)=2, x3=3(2)=6, x4=4(6)=24, etc. In fact, xn=n!, but the notation is really a description, not a formula. I don't know any nice "formula" for n! (except for itself).

    Yet another recursive sequence
    Here the sequence looks like 1, 1+2=3, 1+2+3=6, 1+2+3+4=10, etc. In this case, a formula can be guessed. It seems that x_n should be (n(n+1))/2. I will call y_n=(n(n+1))/2. If you plug in n=1 and n=2 and n=3 and n=4, you'll get the numbers y1=1 and y2=3 and y3=6 and y4=10. But why should x10,000 be equal to y10,000? Why should all of the infinitely many equations xn=yn be true?

    A proof strategy

    We could think of a very long row of dominos standing on their narrow end. The dominos are close enough together so that if one falls, the next one falls over. I bet that if the first one is pushed then they will all fall over.
    the first one is pushed ...

    This is just the observation that x1=1 and y1=1.
    if one falls, the next one falls over.
    Suppose x_n=y_n. Then y_(n+1)=(using the formula!) ([n+1]([n+1]+1))/2=([n+1](n+2))/2=([n+1]n+[n+1]·2)/2=([n+1]n)/2+([n+1]·2)/2=y_n+[n+1]. But we assumed that x_n=y_n, so this means y_(n+1)=x_n+[n+1]=x_(n+1).
    The domino theory was a part of U.S. foreign policy for a long time (it still might be). Here, my dominos represent a proof technique called mathematical induction. I just used the technique to try to convince you that for all n, xn=yn. I hope that you appreciate the almost paradoxical strength of this: a finite amount of writing has proved an infinite number of statements!
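A machine can knock over a lot of dominoes quickly. This Python sketch (names mine) checks x_n=y_n for the first 10,000 values of n; of course this is evidence, not a proof, which is exactly why induction is worth having:

```python
def check(limit):
    """Compare the recursive sums x_n with the guessed formula y_n = n(n+1)/2."""
    x = 0
    for n in range(1, limit + 1):
        x += n                     # x_n = 1 + 2 + ... + n
        if x != n * (n + 1) // 2:  # y_n
            return False
    return True

print(check(10000))
```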

    Much of this sort of verification can now be done by computer. One of the world leaders in this endeavor is Professor Doron Zeilberger, a faculty member at Rutgers. He is very approachable, and he is really smart.

    The harmonic numbers
    Here is a sequence which occurs in many applications. The nth harmonic number is the sum 1+1/2+1/3+1/4+...+1/n. This can be abbreviated by SUM_(k=1)^n 1/k. Here k is called the index of summation, and I can't see k (logically!) outside of the SUM sign. It is logically just as inaccessible as the integration variable in INT_1^x (1/t) dt = INT_1^x (1/q) dq = INT_1^x (1/w) dw.
    Maybe you might find the harmonic numbers more appealing if I wrote the definition recursively:

    The text Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth, and Patashnik shows that the harmonic numbers are related to stacking books over an edge. Please see the pictures displayed here; a more complete explanation, available in the text cited, is also here.

    What can one say about the harmonic numbers? Do they converge? Here are the first 20:

    1.000000000, 1.500000000, 1.833333333, 2.083333333, 2.283333333, 
    2.450000000, 2.592857143, 2.717857143, 2.828968254, 2.928968254, 
    3.019877345, 3.103210678, 3.180133755, 3.251562326, 3.318228993, 
    3.380728993, 3.439552522, 3.495108078, 3.547739657, 3.597739657
    I can't see anything clearly from this. But look at H8:
    Break up into pieces:
    1  +1/2   +1/3+1/4   +1/5+1/6+1/7+1/8
    Underestimate each piece:
    1  +1/2   +1/4+1/4   +1/8+1/8+1/8+1/8
    1  +1/2   2(1/4)=1/2   4(1/8)=1/2
    So that H_8>=1+3(1/2). I hope this persuades you (a proof can be done by mathematical induction, or I'll show you another way next time) that H_(2^m)>=1+m(1/2). First this shows that the sequence of harmonic numbers grows without any upper bound so that it can't converge. And, wow, this estimate grows very slowly. For example, to be sure that a harmonic number is greater than 100 using this estimate, I would need to look at H_(2^198): that is, I'd need to add up about 10^60 terms! That's a lot of terms.
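A Python check (my function name) of the estimate H_(2^m)>=1+m/2 for the first several m:

```python
def harmonic(N):
    """The Nth harmonic number 1 + 1/2 + ... + 1/N."""
    return sum(1 / k for k in range(1, N + 1))

# H_(2^m) should stay at or above 1 + m/2; note how slowly both columns grow.
for m in range(15):
    print(m, harmonic(2 ** m), 1 + m / 2)
```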

    I will investigate this more Wednesday.

    Thursday, October 20

    { Perpendicular | Orthogonal | Normal }
    In many areas of math and science and engineering, these words are intended to mean the same thing.


  • What is an equation for a line perpendicular to y=5x+7 through the point (2,1)?
  • We need a slope and a point. We have the latter. The slope should be the negative reciprocal of the slope of the line given: that is, -1/5. So an equation for the line is (y-1)=-(1/5)(x-2).

    Two curves intersect orthogonally at a point if their tangent lines at that point are perpendicular.

  • Consider the curve y=1-x^2, and other parabolas y=C(1-x^2) with C some negative number. Displayed to the right is the situation, with C=-2, -1, and -1/2. What value of C will make the curves y=1-x^2 and y=C(1-x^2) intersect orthogonally?
  • The curves intersect at x=+/-1. At +/-1, if y1=1-x2, y1´=-2x, so y1´=-/+2. But if y2=C(1-x2), then y2´=-2Cx, so y2´=-/+2C. The product of the slopes is 4C. This is -1 if C=-1/4. The winning curve is displayed to the right.
  • Two families of curves are orthogonal if every member of one family is orthogonal to every member of the other family.

    This may be pretty but should any engineer care about it?

    A very dilute bit of physics: heating a thin metal plate
    Orthogonal families of curves occur in many applications. Perhaps the simplest occurrence to explain is the temperature of ideal steady-state heat distributions on thin homogeneous plates. So think of a thin plate with some heat distribution on the edge. The heat is supplied in such a way that if we measure the temperature at any point inside the plate, that temperature does not change with time. This is what's called a steady-state temperature distribution.
    The blue stuff is supposed to be ice cubes, and cold. The red stuff is supposed to be flames, and hot.
    This is romantic art in support of mathematics instruction. I hope you appreciate it.

    Suppose I give you a heat distribution on the edge of the plate, as shown. The red stuff is supposed to be flames, and the blue stuff is supposed to be ice cubes. If I let time pass and keep the ice cubes and flames on the edge, maybe you can see that eventually the temperature distribution inside the plate will stabilize, and we will get a steady state temperature distribution. Maybe you can see the heat flow curves ("flux") and the isothermals. It is not obvious that these families of curves are orthogonal, but this actually is true!
    Families of orthogonal curves also arise in other applications, prominently in, say, electricity and magnetism.

    An example

  • Consider the family of parabolas, y=Cx2, for all possible C's. This is a collection of parabolas going through the origin. Can we find the family of curves which are orthogonal to these curves?
  • Well, C=y/x^2 also describes these curves. The reason for isolating the C is because I want to differentiate, and the C will go away. I'd like to get some differential equation describing this family of curves. Let me d/dx the equation C=y/x^2. The result is: 0 (of course!) = stuff from the quotient rule. What is the "stuff from the quotient rule"? It is:
    [(dy/dx)·x^2 - y·(2x)]/x^4
    This should be 0, so (if we multiply by x^4) we get (dy/dx)x^2-2xy=0, and then dy/dx=2y/x. Now if we want curves which are orthogonal to these curves, we need dy/dx to be the negative reciprocal of this. The orthogonal curves must satisfy the differential equation dy/dx=-x/(2y).
    This is a separable equation, and not so hard. Therefore we can solve it:
      Separate: dy/dx=-x/(2y) gives 2y dy=-x dx
      Integrate: integrating both sides of 2y dy=-x dx gives y^2=-(1/2)x^2+C
    And we know that the equation (1/2)x^2+y^2=Constant describes a family of curves orthogonal to the original parabolas. These new curves are a collection of ellipses with center at the origin and with major axis on the x-axis and minor axis, 1/sqrt(2) as long, on the y-axis.
    Some of the original parabolas are shown in the accompanying picture, along with some of the ellipses of the family of curves orthogonal to them.
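    A quick computational check of the orthogonality (a Python sketch of my own, not something we did in class; the function names are mine): at any point off the axes, the slope 2y/x of the parabola through the point and the slope -x/(2y) of the ellipse through the same point should multiply to -1.

```python
import math

def parabola_slope(x, y):
    # dy/dx along y = C x^2, using dy/dx = 2y/x (the C has been eliminated)
    return 2 * y / x

def ellipse_slope(x, y):
    # dy/dx along (1/2)x^2 + y^2 = const, from implicit differentiation:
    # x + 2y y' = 0  =>  y' = -x/(2y)
    return -x / (2 * y)

# At any point (x, y) off the axes, one parabola and one ellipse of the
# two families pass through it; their slopes should multiply to -1.
for (x, y) in [(1.0, 2.0), (0.5, -3.0), (-2.0, 0.25)]:
    product = parabola_slope(x, y) * ellipse_slope(x, y)
    print(x, y, product)   # each product is -1 (up to rounding)
```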

    Another way to get the differential equation of the family of parabolas
    If y=Cx^2, then dy/dx=2Cx. But C=y/x^2, so that dy/dx=2Cx=2[y/x^2]x and therefore dy/dx=2y/x, just as we had above.

    And we ended by ...
    breaking up into small groups and working on the limit problems. It was fun!!!

    I'd like to make sure that we are all up to speed on methods of evaluating limits, because this will be very useful in the remaining part of the course. So please hand in
    these problems on Monday.
    Here are pieces of the 152 syllabus which I'd like you to look at, and I hope that Mr. Scheinberg will be able to answer questions about them on Thursday.

    Arc length,
    surface area
    8.1: 3, 8, 11, 34
    8.2: 1, 4, 5, 6, 14, 31
    Differential equations,
    direction fields
    9.1: 1, 3, 4, 6, 9, 10
    9.2: 1, 3, 4, 5, 6, 9, 11
    Separable equations;
    exponential growth
    9.3: 1, 4, 19, 20, 21, 37, 39
    9.4: 3, 4, 5, 9, 10, 14

    Wednesday, October 19

    I reviewed where we were: discussing solving differential equations. I mentioned the examples we had: the vibrating spring, exponential growth and decay, and separable differential equations. Each of these examples had families of solutions, and a function in the family of solutions was specified by some constants that I referred to as initial conditions. I asserted that everything is included in the following wonderful

    Basic Theorem about solutions of differential equations
    If f(x,y) is a differentiable function of x and y and if (x0,y0) is in the domain of f(x,y), then the initial value problem y´=f(x,y) satisfying y(x0)=y0 has exactly one solution.

    Beloved question BC4 (free response question #4 on the 2005 AP calculus BC exam) dealt with f(x,y)=2x-y and asked questions about the slope field and Euler's method.
    When I first learned of the Basic Theorem, as I call it above, I thought that it sort of handled all possible difficulties related to differential equations. I couldn't really understand the complexities of trying to actually use mathematics in real applications. The proof of this result could probably be explained to you by the end of this course. For example, one way to get the solution mentioned is by using Euler's method (that's a rather slow way, but it works). What kinds of reasonable questions are not answered by this theorem? Let me show you.

    Example 1 Suppose I know that y´(x)=e^(x^2). Well, here is the solution which is guaranteed by the theorem:
    F(x)=int_0^x e^(w^2) dw+C. The Fundamental Theorem of Calculus says that F´(x)=e^(x^2) and the "+C" allows adjustment for an initial condition. In fact, we've just rewritten the differential equation and transferred the computational "responsibility" to the definite integral. As I mentioned back when we were first starting methods of antidifferentiation, there's no way to find an explicit antiderivative of this function in terms of standard functions. Evaluation must be done through an approximation technique. So learning about the solution (that it exists, that there is exactly one) has not helped at all.
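    Even though there is no elementary antiderivative, the definite integral can certainly be approximated. Here is a sketch using Simpson's rule (my own code, not from class); the sanity check compares a difference quotient of F with e^(x^2), as the Fundamental Theorem predicts.

```python
import math

def F(x, n=1000):
    """Approximate F(x) = integral from 0 to x of e^(w^2) dw by Simpson's rule.
    n must be even."""
    h = x / n
    total = math.exp(0.0) + math.exp(x * x)
    for i in range(1, n):
        w = i * h
        total += (4 if i % 2 else 2) * math.exp(w * w)
    return total * h / 3

# Sanity check: F'(x) should equal e^(x^2); compare a central difference quotient.
x = 0.7
deriv = (F(x + 1e-5) - F(x - 1e-5)) / 2e-5
print(deriv, math.exp(x * x))   # the two numbers agree closely
```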

    Example 2 I think this equation has more subtle aspects. Let's look at y´(x)=xy^2. This equation looks "easy". It is certainly separable, and the f(x,y) seems rather simple. We separate variables and integrate: dy/y^2=x dx, then -1/y=(1/2)x^2+C, and we can even solve for y in terms of x: y=1/(C-{1/2}x^2). (I renamed C.) Now suppose we want the solution to satisfy the initial condition: when x=0, y=8. Then C should be 1/8, and the solution looks like y=1/({1/8}-{1/2}x^2). This is o.k., a nice function. A picture is to the right (the units on the horizontal and vertical axes are rather different). We get into some trouble when we think about the solution curve. Suppose the curve represents the growth of "something" (I think I suggested "wahoonies" in class). Usually x represents the "independent" variable, say, time. What happens to the growth of the wahoonies? Look at {1/8}-{1/2}x^2 from x=0 (where it is 1/8) up to ... well, this should not be 0 because it is in the denominator. It will be 0 when x=1/2 (also, of course, -1/2). The growth gets larger and larger, and, finally, explodes (!?) at 1/2 (also, of course, backwards at -1/2). We can't predict the population of wahoonies further forward than 1/2 or further backwards than -1/2. This disaster is, to me, unforeseen in the nice function xy^2. Things are actually worse than this.
    If we change the initial condition to, say, (0,1/80,000), then the unique solution is y=1/({1/80,000}-{1/2}x^2) and the domain of this function is only (-1/200,+1/200): the wahoonie explosion is even closer in time. Another picture is shown, with even more distorted axes, but the idea is, I hope, clear: a narrower, steeper curve with a higher-up initial condition. As the initial number of wahoonies grows, the explosion comes closer and closer in time.
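    The blow-up time can be read off from the formula: with y(0)=y0>0 we have C=1/y0, and the denominator vanishes when x=sqrt(2/y0). A two-line Python check (my own, not from class):

```python
import math

def blowup_time(y0):
    """For y' = x*y^2 with y(0) = y0 > 0, the solution is
    y = 1/(C - x^2/2) with C = 1/y0, so it blows up when x = sqrt(2/y0)."""
    return math.sqrt(2.0 / y0)

print(blowup_time(8))        # 0.5, as in the lecture
print(blowup_time(80000))    # 1/200 = 0.005 (up to rounding): earlier explosion
```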

    How to "solve" ODE's
    What these two examples, and others, can tell you about the famous Existence and Uniqueness Theorem for ODE's (a version is quoted above as the "Basic Theorem") is not that the theorem is wrong or a fraud, but more that it declares only what is there. I know that I've had a tendency to "read into" the theorem a heck of a lot more than it contains. The two examples I showed are an effort to convince me (again!) and maybe you that solutions may not be effectively computable, and that the solutions are guaranteed only to "live" for a short time, and more than that cannot be assumed.

    Solving ODE's can occur on various levels.

    1. We could ask for solutions which are nice formulas written in terms of familiar functions. Let me call this the algebraic level.
    2. We could ask for numerical approximations which are guaranteed to be close to the real solutions. This is certainly possible, but more difficult than one might initially imagine: how could we numerically analyze the explosion in the wahoonies? How does one numerically analyze the behavior of space/time near black holes? I'll call this the numerical level.
    3. We could ask for asymptotic information. Such things might include ideas about rates of growth, limit behavior, and dependence on initial conditions ("chaotic behavior"). I'll call this the qualitative level.
    People interested in differential equations usually want to exploit all of these methods to get information and it's not a good idea to neglect any of them. The qualitative level may be a bit new to you, so let me contrast it with the algebraic level in one example (a logistic equation). I'll totally neglect the numerical level in this course, but I know that you've seen Euler's method, which is a rather slow numerical approximation technique.
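    Since Euler's method came up as the one numerical technique you have seen, here is a minimal sketch of it (my own code, not from class) applied to the logistic equation discussed next; slow, as advertised, but it works:

```python
def euler_logistic(y0, t_end, n):
    """Euler's method for y' = y(1 - y): take n steps of size t_end/n,
    each time moving along the tangent line given by the slope field."""
    h = t_end / n
    y = y0
    for _ in range(n):
        y += h * y * (1 - y)
    return y

# Starting at y(0) = 0.37, the solution should creep toward the
# equilibrium y = 1 as t grows.
print(euler_logistic(0.37, 10.0, 10000))
```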

    Exact solution of a logistic equation
    Here I'll look at y'(t)=y(1-y). We're supposed to think that the rate of change is directly proportional to the population y(t) at time t, but is also limited by the amount of resources available. Here the resource limit (carrying capacity?) is 1. Of course I have made the numbers simple so that someone essentially even more simple can analyze the equation easily. To solve this equation (in the sense of the first level above) I recognize it as a separable equation and write

      dy
    ------ = dt
    y(1-y)
    and then (partial fractions!) recognize that 1/[y(1-y)]={1/y}+{1/(1-y)}, where the advantage is that I can antidifferentiate the pieces. So ln(y)-ln(1-y)=t+C (I forgot the minus sign when I was analyzing this in preparation for the lecture!) so that ln(y/(1-y))=t+C and (renaming K=e^C) y/(1-y)=Ke^t, so that y=(1-y)Ke^t=Ke^t-yKe^t and y+yKe^t=Ke^t and (1+Ke^t)y=Ke^t. Finally,
      y=Ke^t/(1+Ke^t)

    Well, how can we use this formula? I can try to match an initial condition, and then I can look at a solution curve. In class I used (0,3/4) (so K was "clearly" 3) but let me be more adventurous here. When t=0 in Ke^t/(1+Ke^t) we get K/(1+K). If we want K/(1+K) to be .37, an approximate value of K is .587 (yes, a silicon pal helped me here). And my same pal graphed the solution curve shown to the right. It sure looks like the curve is an increasing function of time, and to the right it approaches +1 and to the left it approaches 0. You can check these limits using the algebraic formula:
    As t-->+infinity, the quotient is always less than 1 (the bottom is larger than the top!) but the 1 becomes less and less significant, because e^t-->infinity. If you would like a bit more algebra, consider that (factor out the exp!) .587e^t/(1+.587e^t)=1/([1/.587]e^(-t)+1) and e^(-t)-->0 as t-->+infinity.
    Similarly, e^t-->0 as t-->-infinity, so that y-->0 then.
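    The algebra above can be double-checked numerically (a Python sketch of my own, standing in for my silicon pal): recover K from the initial condition and verify that y(t)=Ke^t/(1+Ke^t) really satisfies y´=y(1-y).

```python
import math

def K_for(y0):
    # Initial condition y(0) = K/(1+K)  =>  K = y0/(1 - y0)
    return y0 / (1 - y0)

def y(t, K):
    return K * math.exp(t) / (1 + K * math.exp(t))

K = K_for(0.37)
print(K)                         # about 0.587, matching the lecture

# Residual of the ODE y' = y(1-y) at a sample time, via a difference quotient.
t = 1.3
lhs = (y(t + 1e-6, K) - y(t - 1e-6, K)) / 2e-6
rhs = y(t, K) * (1 - y(t, K))
print(lhs, rhs)                  # the two sides agree
```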

    One problem is that in real applications, we seldom know exact initial conditions, and one thing that differential equations "teach" is that there may be what's called sensitivity to initial conditions. What happens if I change the initial condition from .37 to .38? How certain is it that y(t) will still-->1 as t-->infinity? A qualitative study may give such information easily.

    Slope field analysis of a logistic equation
    Look at the differential equation: y'(t)=y(1-y). If a solution curve of this equation passes through the point (2,3), then the slope of the curve must be (3)(-2)=-6. The curve must be tangent to a line of slope -6 passing through (2,3). We could therefore {think|hope} that the curve will look a bit like that line near (2,3): we can draw a short line segment of slope -6 through the point (2,3). We can try to draw lots of these little line segments. This is called a direction field (textbook) or a slope field (what I will call it here). Maple's command, dfieldplot (included in the package DEtools), produced the collection of green arrows here. I am not too familiar with the plethora (!) of options of dfieldplot, so I took the default settings, which produce "line field" elements with arrow heads. I superimposed the solution curve we know already. I hope you can see that it is tangent to the slope field elements shown.
    Here's a picture with more of a qualitative point of view. I drew the slope field at integers and half-integers from -3 to 3 in both the horizontal and vertical directions. Then I tried to understand and sketch what would happen with various initial conditions at time 0. There are two special solution curves which happen when y(1-y)=0. The two constants identified, 0 and 1, give horizontal lines which satisfy the differential equation. These constants make the right-hand side equal to 0, and, since the functions are constants, the derivatives of the functions are 0. These are called equilibrium solutions of the differential equation.
    The initial conditions
    The initial conditions in interval A on the y-axis are in magenta. Forward evolution pulls the corresponding solution curves all towards the equilibrium solution y=1. Backwards, they seem to approach y=0. The initial conditions in region B seem to evolve as t-->+infinity towards, again, the equilibrium solution y=1. As time goes backwards, these solutions explode out (the wahoonies are exploding again). The solution y=1 is a special kind of equilibrium, called a stable equilibrium. Small disturbances in initial conditions near this equilibrium lead to solutions which all approach it as t-->+infinity. (I tried this color but it was too darn hard to read!) Here in region C the instability of the equilibrium solution y=0 is shown. If you perturb the solution up or down, eventually solution curves push away from the equilibrium. Region C's initial conditions lead to negative-infinity type explosions, as shown.

    Comment One thing which makes drawing the slope field and understanding it easier is that the differential equation y'(t)=y(1-y) is autonomous: there is no mention of the independent variable, t, on the right-hand side. The word autonomous has the following (perhaps more common) dictionary meanings: self-governing; independent; subject to its own laws.

    I think this is very nice, but of course one loses some precision with the qualitative point of view. For example, a solution curve between y=0 and y=1 is concave up for a while and then is concave down. It has one inflection point. The algebraic solution allows (in principle!) finding the inflection point exactly. The geometry just sort of waves at it. But maybe that's enough?

    How are human brains built? (Maybe -- a limited comment!)
    I've been told that the human brain has a terrific amount of its capacity directed towards interpretation of visual data. Certainly I find the slope field pictures much more convincing than looking at the formula of the algebraic solution. And more numbers would not necessarily help that much: I find a bunch of numbers difficult to interpret. So that's why people like visual displays of information.

    And another autonomous equation
    I think I looked at something like y´(t)=(y-6)^4(y+5)^7. Certainly I don't think I could explicitly solve and get an algebraic solution. But I could easily (well, almost easily!) sketch a slope field for this equation. It is also autonomous, so I can move the slope field elements left and right after I sketch them on one vertical line. The lines y=-5 and y=6 represent equilibrium solutions. y=-5 is an unstable equilibrium: perturbations of it move up towards 6 or down to -infinity. y=6 is a more complicated situation.
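    The qualitative analysis can even be mechanized (a Python sketch with my own naming, not something we did in class): classify each equilibrium by the sign of y´ just below and just above it.

```python
def f(y):
    # right-hand side of the autonomous equation y' = (y-6)^4 (y+5)^7
    return (y - 6) ** 4 * (y + 5) ** 7

# Classify each equilibrium by the sign of y' slightly below and above it.
for eq in (-5.0, 6.0):
    below, above = f(eq - 0.01), f(eq + 0.01)
    print(eq, "below:", "up" if below > 0 else "down",
              "above:", "up" if above > 0 else "down")
# y = -5: down below, up above  -> solutions pushed away (unstable)
# y =  6: up below, up above    -> attracting from below only, repelling above
```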

    Limit manipulations
    I'd like students to do these problems and hand them in on Monday. That way I will have a firmer expectation that we'll be able to cope with future limit problems (which will occur!) in the course.

    Return of exam
    I returned the exam and asserted that everyone in the class should stay in the class. Further remarks will be made on Thursday.

    Thursday, October 13

    Valiant Mr. Scheinberg answered questions about the remainder of the sections of chapter 7.

    Another differential equation
    Here's a simpler differential equation than the one we looked at last time: y'(t)=Ky(t), where K is some fixed constant.

    What it might mean

    What are the solutions

    All the solutions?

    Another way to solve it

    Separable equations

    A modification of growth, with a carrying capacity
    The logistic equation

    Diary entry in progress! More to come

    Wednesday, October 12

    Galileo is reported to have declared, "Nature's great book is written in mathematics." If that is so, then ever since Newton's efforts to mathematicize physics, the chief mathematical dialect in which this book is written is the language of differential equations.

    An initial situation is specified mathematically, and then "things" evolve or change according to some well-specified "laws" relating their interaction. Efforts of scientists and engineers were based upon this approach for several centuries. Although the approach is not sufficient to handle everything, and sometimes is not easy, it had major successes in both theoretical and practical aspects of science and engineering. In today's discussion, we will look at a very idealized model of a physical situation which might make subsequent consideration of theory easier. We will consider an ideal vibrating spring with no damping. This isn't the simplest differential equation. Some important differential equations are simpler -- certainly the equations describing the motion of a rock dropping under gravity, widely considered in early study of calculus, are easier. But what happens in this example is much more characteristic of how differential equations are used than most simpler examples.

    What we need to know
    We need F=ma: the force on an object is directly proportional to the rate of change of the rate of change of the position of the object: the acceleration. The constant of proportionality is called mass. We will discuss an ideal spring, without damping or dissipation of energy. One should think that the spring is floating in space; in fact, it is alone in the universe. There's no gravity, no air resistance, no ... anyway, the spring has a mass attached to it. If we attached the mass very gently and at the correct length (in equilibrium) the spring would not appear to move. However, if we push or pull the mass, or attach the mass at a position other than equilibrium, a force seems to act on the mass. Hooke's law states that the spring exerts a force on the object whose direction is opposite to the object's displacement from equilibrium and whose magnitude is directly proportional to the magnitude of the displacement. So if x(t) is the displacement from equilibrium at time t, the spring exerts a force of -kx(t) on the object. Since also F=ma, and a is x´´(t), the acceleration, we have the law of motion for an undamped spring: mx´´(t)=-kx(t). By the way, Hooke's law has been experimentally verified under many conditions, as long as the spring is not stretched or squeezed too much (don't take a rubber band and stretch it 20 feet, for example).

    The double game
    I will try to play two sorts of intellectual game: I will try to use physical "intuition" to determine what to expect about spring motion, and I will also investigate what purely mathematical deductions can be made. Well, first I want to make my life a little bit easier. I'll assume that my "units" are chosen so that m=1 and k=1. Remarks at the end will cover what happens if we don't assume this. Now I want to solve the equation mx´´(t)=-kx(t). Well, what does "solve" the equation mean? I know if I study the polynomial equation 2B^3-4B-8=0 that a solution is B=2. When I substitute 2 into the polynomial the equation becomes true. Here I have what's called a second-order (the unknown function appears with a maximum of two derivatives) ordinary differential equation. The word "ordinary" is sometimes used, because there are other sorts of differential equations, such as partial differential equations. A solution would be a function which could be substituted into the equation, and for which the equation would be true for all appropriate values of t (say, t in the domain of x(t), for example). We could try some x(t)'s. For example, functions like x(t)=-16t^2+9t-5 occur when we drop rocks. Then x´´(t)=-32, and there are very few t's for which -32=-(-16t^2+9t-5). So this x(t) is not a solution. But we should try to use our physical intuition. The motion of a spring should be back and forth, so certainly likely candidates should be bounded. But, in fact, the function -16t^2+9t-5 gets arbitrarily large in magnitude. What should we try? Well, sin(t) was suggested, and that works: if x(t)=sin(t), then x´´(t)=-sin(t)=-x(t). And so does cos(t). Wait: so does cos(t)+sin(t). Indeed, lots of other suggestions work: x(t)=Acos(t)+Bsin(t) works if A and B are any constants. The equation x´´(t)=-x(t) is structurally rather nice. The sum of two solutions is a solution, and a constant multiplying a solution is a solution.
(This is called, in mathspeak, linearity, and in engineeringspeak, the principle of superposition). Well, mathematically A and B are nice. But what do they say about the physics of the situation?

    Initial conditions
    If x(t)=Acos(t)+Bsin(t), then A=x(0) and B=x´(0). Therefore A represents the initial position (really, displacement from equilibrium) and B represents the initial velocity of the mass. What do cos(t) and sin(t) represent? Since cos(0)=1 and cos´(0)=-sin(0)=0, somehow cos(t) in this situation represents an initial position solution to x´´(t)=-x(t): we could call it xpos(t). And sin(t) has sin(0)=0 and sin´(0)=cos(0)=1, an initial chunk of velocity. And so maybe in this situation we could call sin(t) the solution xvel(t), an initial velocity solution. If we needed to do lots of solutions to x´´(t)=-x(t) with many varied initial positions and velocities, the formula Axpos(t)+Bxvel(t) might be useful.

    Algebraic digression: simple harmonic motion
    I didn't do this in class, mainly because I find algebra painful. The algebra following is really motivated by physical considerations, so maybe that excuses it.
    Look at Acos(t)+Bsin(t). I can rewrite this in a way which some people find more appealing. I'll multiply and divide by sqrt(A^2+B^2). The result is

                        A                    B
    sqrt(A^2+B^2)[-------------·cos(t)+ -------------·sin(t)]
                  sqrt(A^2+B^2)         sqrt(A^2+B^2)
    There are some funny numbers appearing: A/sqrt(A^2+B^2) and B/sqrt(A^2+B^2). These numbers have squares which sum to 1, and therefore there is a right triangle with hypotenuse of length 1 which has them as legs. Also therefore there is an angle, theta, so that sin(theta)=A/sqrt(A^2+B^2) and cos(theta)=B/sqrt(A^2+B^2). This is "just" from triangle geometry. Look at the picture. Then
    x(t)=sqrt(A^2+B^2)[sin(theta)cos(t)+cos(theta)sin(t)]=sqrt(A^2+B^2)sin(t+theta).
    We have rewritten x(t) as a magnitude, sqrt(A^2+B^2), multiplied by sin(t+theta). This is a sine "wave" with a phase angle, theta, retarding it. So the motion, x(t), goes up and down, and the largest deviation from equilibrium is sqrt(A^2+B^2) and the phase angle is arctan(A/B).

    I gave Maple A=3 and B=7, and the result, 3cos(t)+7sin(t), is plotted here. To me the fact that simple harmonic motion is the result is not totally obvious. The magnitude, sqrt(A^2+B^2), is about 7.62, and the phase angle, arctan(A/B), is about .40.
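    Here is a numerical confirmation of the rewriting (Python instead of Maple, a sketch of my own): compute the amplitude and phase angle and check the identity Acos(t)+Bsin(t)=sqrt(A^2+B^2)sin(t+theta) at a few times.

```python
import math

A, B = 3.0, 7.0
R = math.hypot(A, B)            # sqrt(A^2 + B^2), the amplitude
theta = math.atan2(A, B)        # phase angle with sin(theta)=A/R, cos(theta)=B/R

print(R, theta)                 # about 7.62 and 0.40, as in the notes

# Check the identity A cos t + B sin t = R sin(t + theta) at a few points.
for t in (0.0, 1.0, 2.5, -3.0):
    print(A * math.cos(t) + B * math.sin(t), R * math.sin(t + theta))
```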

    Other solutions?
    I told students that I knew another solution to x´´(t)=-x(t) besides Acos(t)+Bsin(t). Being clever rascals (RASCAL: "One that is playfully mischievous") they refused to believe me. I said, I have a secret solution, W(t), which satisfies W´´(t)=-W(t) and W(0)=3 and W´(0)=-4. They asserted that this W(t) would have to be 3cos(t)-4sin(t). So we compared them. Here is one way to compare: form a function by taking the difference, and then look at this difference. So we defined:
    C(t)=W(t)-(3cos(t)-4sin(t)). What do we know about C(t)? Well, C´´(t)=-C(t). This is not entirely obvious, but it comes from subtracting the equations
    W´´(t)=-W(t) and (3cos(t)-4sin(t))''=-(3cos(t)-4sin(t)).
    These equations are true because both functions are solutions of the spring motion equation. What else do we know?
    C(0)=W(0)-(3cos(0)-4sin(0)). I assumed that W(0)=3, so C(0)=0.
    Also, C´(t)=W´(t)-(3cos(t)-4sin(t))´ so C´(0)=0.
    This shouldn't be so strange since we created C(t) to compare solutions with the same initial velocity and position. Now C(t) is supposed to describe the motion of a spring when the initial displacement and the initial velocity are both 0. I believe that under those conditions the ideal spring will never move. Why, either from the physical or mathematical points of view, should this be so?

    Conserved quantities
    I know that the kinetic energy of a mass, m, (recall, m=1 here) is (1/2)mv^2. Therefore the kinetic energy of the mass on the spring is (1/2)x´(t)^2. The potential energy is equal to the amount of work which must be done to get something into a position. Now to push the mass into a displacement of x(t) from equilibrium, we must push against the Hooke's law force of -kx(t). But that force varies with distance. We did some problems like this when we discussed work. What you do is multiply kx by dx, a tiny distance in which the force hardly varies, sum, take limits, and, hey, we end up with int_0^(x(t)) kx dx and this is (1/2)k·x(t)^2. Also, k should be 1. Also we lost the minus sign because we are pushing against the spring. Wow!
    The total energy of this ideal and isolated spring is the sum of the potential and kinetic energies, so it is TE(t)=(1/2)x´(t)^2+(1/2)x(t)^2. I wonder if this energy changes over time?
    The picture is supposed to show you the kinetic energy and the potential energy separately. Indeed, it turns out that the total is a constant. Read on!

    The math person takes the energy and runs ...
    Now forget the previous paragraph, and consider the TE(t) associated with the function C(t) which has these properties:
    C´´(t)=-C(t) and C(0)=0 and C´(0)=0.
    Take d/dt of TE(t)=(1/2)C´(t)^2+(1/2)C(t)^2, and get (minding the Chain Rule!) TE´(t)=(1/2)2C´(t)C´´(t)+(1/2)2C(t)C´(t). More than one student noticed that this is the same as TE´(t)=C´(t)[C´´(t)+C(t)]. But the quantity inside the []'s is 0 since the spring equation is satisfied. Therefore TE´(t)=0 for all t (this is a version of conservation of energy). That means TE(t) is constant. But TE(0)=(1/2)C´(0)^2+(1/2)C(0)^2, so (using the initial conditions we have) TE(0)=0, and therefore TE(t) is always 0. Hey, that means C(t)^2 is always 0, so C(t)=0 always, so W(t)-(3cos(t)-4sin(t))=0 for all t, so W(t)=3cos(t)-4sin(t), and I must have been mistaken: the W(t) I thought was different is exactly the same as the solutions I knew before.
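    A small numerical illustration of the conserved quantity (a Python sketch of my own): for the solution 3cos(t)-4sin(t), the total energy should be the constant (1/2)(3^2+4^2)=12.5 at every time.

```python
import math

def total_energy(x, v):
    # TE = (1/2) v^2 + (1/2) x^2 for the unit spring (m = 1, k = 1)
    return 0.5 * v * v + 0.5 * x * x

# For the solution x(t) = 3 cos t - 4 sin t, we have x'(t) = -3 sin t - 4 cos t.
def x(t):
    return 3 * math.cos(t) - 4 * math.sin(t)

def v(t):
    return -3 * math.sin(t) - 4 * math.cos(t)

for t in (0.0, 0.7, 2.0, 5.0):
    print(t, total_energy(x(t), v(t)))   # always (1/2)(3^2 + 4^2) = 12.5
```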

    Therefore ...
    In some sense we have a perfect Newtonian description of this very simple system. The initial conditions of position and velocity determine the motion of the spring for all time, using the differential equation to describe how the spring "evolves" through time. The key to realizing that we had all solutions and therefore had described all of the legal motions of the system was examination of the total energy of the system.

    If I had not set k and m equal to 1 I think I would have needed to look at cos(sqrt{k/m}t) and sin(sqrt{k/m}t) as solutions of mx´´(t)=-kx(t).

    I also asked what would happen if we changed x´´(t)=-x(t) to x´´(t)=x(t). Some suggestions were made. And, actually, e^t and e^(-t) are solutions. And, indeed, if A and B are constants, then Ae^t+Be^(-t) is a solution. I asked what the initial position and velocity solutions were. That is, can we find A and B so that if x(t)=Ae^t+Be^(-t), then x(0)=1 and x´(0)=0? With some thought, we got A=1/2 and B=1/2. Huh. The initial velocity solution, with x(0)=0 and x´(0)=1, has A=1/2 and B=-1/2. Indeed. So: for this equation, x´´(t)=x(t), xpos(t)=(1/2)(e^t+e^(-t)). This is the hyperbolic cosine, called cosh(t) ("kosh of t"). And xvel(t)=(1/2)(e^t-e^(-t)). This is the hyperbolic sine, called sinh(t) ("cinch of t").

    Diary entry in progress! More to come

    Material related to what is discussed today is in chapter 9 of the textbook, but I hope you will be ready tomorrow to discuss any questions you may have about problems from the remainder of chapter 7.

    Monday, October 10

    Exam news

    Today's class
    Two formulas, one for arc length and one for (lateral) surface area. I covered this material in a very cursory fashion, because I don't think I really have much to add to what any textbook says.
    Word of the day cursory
    hasty, hurried.

    Diary entry in progress! More to come

    The definite integral computes ...
    Cut apart, approximate, sum, take limits.

    Diary entry in progress! More to come

    A formula for arc length

    Diary entry in progress! More to come

    Testing the formula

    1. Straight line segment
    2. Circular arc
    3. Parabolic arc

    Diary entry in progress! More to come

    Computational defects of the arc length formula

    1. A random example
    2. A textbook example
    What's going on?

    Diary entry in progress! More to come

    (Lateral) surface area

    Diary entry in progress! More to come

    Testing the formula

    1. The cone
    2. The sphere
    3. The torus (with a check from Pappus!)

      Diary entry in progress! More to come

    Similar defects
    Quote a textbook example. General comment.

    Diary entry in progress! More to come

    Here is a volume which can be filled with paint but which can't be painted.
    What does this mean?

    Diary entry in progress! More to come

    Please be ready tomorrow to discuss any questions you may have about problems from the remainder of chapter 7. You may wish to read about the material we discussed today in sections 8.1 and 8.2 of the textbook.

    Thursday, October 6

    Gravity and the infinite wire
    So suppose I have a small mass, m, and a straight line: a homogeneous infinite wire, of constant density. What is the gravitational attraction between the wire and the mass? Shouldn't the force be infinite since the wire has infinite mass? Well, no, because the wire's effects begin to fade with distance. But let us be more precise.
    Put a coordinate system on the wire, whose origin will be the closest point to the external mass. The distance between the wire and the mass will be called A. Cut up the wire into thin dx-long pieces. Assume the density of the wire is K. Then the dx-slab of wire has mass K dx, and the attraction between the mass and the piece is GmK dx/L^2, where L is the distance between the piece of wire and the mass. Notice that the force's direction is along the hypotenuse of the triangle. We only need the component "up", because the left/right part of the force is exactly canceled by a symmetric piece of wire on the other side of 0. We should multiply the magnitude by the sine of theta to get that component: notice that this sine is A/L. So the component of the force is [GmK dx/L^2]·(A/L), which is (GmKA/L^3) dx. Since L=sqrt(x^2+A^2), and we should add up all the pieces of the force (?), the actual total force is int_(-infinity)^(infinity) (GmKA/(x^2+A^2)^(3/2)) dx.

    Computing the force
    It was recognized in class (note the passive voice!) that this integral can be computed with a trig substitution. But first we used the substitution x=At, so dx=A dt and (x^2+A^2)^(3/2)=A^3(t^2+1)^(3/2). The limits amazingly stay the same: as x goes from -infinity to +infinity so does t (this is one nice thing about the improper integral). The force is (another A cancels out due to the top part of the sine fraction) (GmK/A)·int_(-infinity)^(infinity) (1/(t^2+1)^(3/2)) dt.
    Now finish by computing int_(-infinity)^(infinity) (1/(t^2+1)^(3/2)) dt (which I don't need -- you may not notice, but the point of the computation is already done!). So take t=tan(w); then the limits t=-infinity and t=+infinity become w=-Pi/2 and w=Pi/2, and dt=(sec(w))^2 dw and 1/(t^2+1)^(3/2) becomes 1/(sec(w))^3, and the integral becomes int_(-Pi/2)^(Pi/2) cos(w) dw, which is 2. The force is then 2GmK/A.
    The wonderful fact is that the force between the wire and the external mass is now inverse first power, starting from an inverse square law of attraction. This is very neat to me. The reason why I claimed earlier that the "point of the computation is already done" is that I wanted to show the force was inverse first power. The particular constant of proportionality is not as interesting to me here.
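    If you would like to believe the value 2 without redoing the trig substitution, a brute-force numerical check (a Python sketch of my own) truncates the improper integral at large finite limits; since the integrand decays like 1/t^3, the neglected tails are tiny.

```python
def integrand(t):
    # the integrand after the substitution x = At: (t^2 + 1)^(-3/2)
    return (t * t + 1) ** -1.5

def simpson(f, a, b, n):
    """Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Truncate the improper integral at +/- 1000; each tail is about 1/(2*1000^2).
val = simpson(integrand, -1000.0, 1000.0, 200000)
print(val)   # very close to 2, the exact value from the trig substitution
```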

    Angstroms and molecules
    Any inverse square law works the same. So if one has a tiny molecule maybe attracted by some force (charge, for example) to a big molecule which is sort of straight, the computation shows that the attraction should be approximately inverse first power. The improper integral does closely approximate the real thing, because the "edges" (far away from x=0) don't usually matter very much. Neat, neat, neat.

    Back to Pi
    My exposition of the famous Gaussian integral is here. This integral is connected to the Central Limit Theorem, one of the most remarkable results in probability and statistics. Please look up this result on the web. There are many animations demonstrating it.

    The other kind of improper integral
    We have been looking at improper integrals where there is a defect in the domain: the interval of integration is infinite. There are also improper integrals where the defect is in the range: the integrand blows up somewhere on a finite interval. Please remember these integrals:
    ∫_{56}^{201} (1/x^5) dx = -1/(4x^4)]_{56}^{201} = -1/(4·201^4)+1/(4·56^4)
    ∫_{56}^{201} (1/x^{1/5}) dx = (5/4)x^{4/5}]_{56}^{201} = (5/4)·201^{4/5}-(5/4)·56^{4/5}
    Now change 56 to B, where B is a positive number less than 201.
    ∫_{B}^{201} (1/x^5) dx = -1/(4x^4)]_{B}^{201} = -1/(4·201^4)+1/(4B^4)
    ∫_{B}^{201} (1/x^{1/5}) dx = (5/4)x^{4/5}]_{B}^{201} = (5/4)·201^{4/5}-(5/4)B^{4/5}
    Now investigate what happens as B-->0+. The first integral, with 1/x^5 as integrand, goes to +infinity. So we will say that
    The improper integral ∫_{0}^{201} (1/x^5) dx diverges.
    As B-->0+, the value of the second integral, with integrand 1/x^{1/5}, approaches (5/4)·201^{4/5}. So we will say that
    The improper integral ∫_{0}^{201} (1/x^{1/5}) dx converges and its value is (5/4)·201^{4/5}.
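    The two limits can be watched numerically. This little sketch (mine, not from class) evaluates the exact antiderivative values as B shrinks toward 0.

```python
# exact values of the two integrals from B to 201, from the antiderivatives above
def int_inverse_fifth_power(B):   # integrand 1/x^5
    return -1 / (4 * 201**4) + 1 / (4 * B**4)

def int_inverse_fifth_root(B):    # integrand 1/x^(1/5)
    return (5/4) * 201**0.8 - (5/4) * B**0.8

for B in (1.0, 1e-2, 1e-4):
    print(B, int_inverse_fifth_power(B), int_inverse_fifth_root(B))
# the 1/x^5 integral blows up; the 1/x^(1/5) integral settles near (5/4)*201^(4/5)
```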

    A failure of physical theory?
    The repulsion between two protons is inverse square. So how much work would be required to "push" two protons together to form another atom with two protons in its nucleus? If we compute this work using the improper integral that simple theory would require, then we'd have something like ∫_{0}^{someplace} (1/x^2) dx, and this integral would diverge: an infinite amount of work would be required. So maybe we need to change theories. Maybe the neutrons "mediate" in some way, or maybe the inverse square law breaks down at really small (nuclear) dimensions, or maybe some other sort of theory is needed. I don't know.

    Inverse powers
    We considered which integrals of the form ∫_{0}^{201} (1/x^STUFF) dx would converge and which would diverge. Here were the conclusions we reached:
    If STUFF<1, then the integral would converge.
    If STUFF>1, then the integral would diverge.
    When STUFF=1 we had to consider the integral separately, because the antiderivative was no longer a simple power of x, but was ln(x). We concluded that
    If STUFF=1, then the integral would diverge.
    This is exactly the reverse (except for the borderline case of 1) of what happened in the other case. Also here is a picture. The picture doesn't help me much at all. Oh well. I like pictures.

    Just one more integral
    Let's look at ∫_{0}^{1} ln(x) dx. I know that ln(x)-->-infinity as x-->0+. Is there a finite amount of "area" enclosed between the y-axis, the x-axis, and this curve?
    Let's integrate by parts.

     u=ln(x)   du=(1/x)dx 
    dv=dx      v=x
    Suppose B is positive and close to 0. Then ∫_{B}^{1} ln(x) dx = x·ln(x)]_{B}^{1} - ∫_{B}^{1} 1 dx = x·ln(x)-x]_{B}^{1}. When x=1 we get just -1, since ln(1)=0. What about B·ln(B)-B? As B-->0+, the -B term -->0. But ln(B)-->-infinity while B-->0+. Which one "wins" in the B·ln(B) computation? Well, just as exponentials go faster, logs go slower. To see that B·ln(B)-->0 as B-->0+, I will need to rearrange things as a fraction so that we can take advantage, again, of L'Hopital's rule. So:
    B·ln(B) = ln(B)/(1/B)
    As B-->0+ this has the infinity/infinity form, so its limit (by L'H) will be the same as the limit of (1/B)/[-1/B^2] = -B^2/B = -B. But that goes to 0. So ∫_{0}^{1} ln(x) dx converges and its value is -1.
    This is not too surprising, since ∫_{0}^{infinity} e^{-x} dx was computed earlier and its value was +1, and the geometric areas are the same, just flipped over y=x.
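    Both claims (that B·ln(B)-->0 and that the value is -1) are easy to confirm numerically. Here is a quick sketch of mine, using the antiderivative x·ln(x)-x.

```python
import math

# B*ln(B) -> 0 as B -> 0+: logs lose to powers
for B in (0.1, 1e-3, 1e-6):
    print(B, B * math.log(B))

# the improper integral via the antiderivative x*ln(x) - x, evaluated from B to 1
def log_integral_from(B):
    return (1 * math.log(1) - 1) - (B * math.log(B) - B)

print(log_integral_from(1e-9))  # close to -1
```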

    Exam news

    Next Thursday
    I hope that Mr. Scheinberg will discuss problems from sections 7.4, 7.5, 7.7 and 7.8.

    Please do problems from the remainder of chapter 7.

    Wednesday, October 5

    I'm starting another section of the diary because the file is large and taking too long to load. Oh well. This lecture introduced the idea of improper integrals. I began in a characteristically lackadaisical fashion.

    Word of the day lackadaisical

    1. Lacking spirit, liveliness, or interest; languid.
    2. idle or indolent especially in a dreamy way.
    I asked for the computation of two integrals.

    Integral #1
    ∫_{56}^{201} (1/x^5) dx = -1/(4x^4)]_{56}^{201} = -1/(4·201^4)+1/(4·56^4)
    Please notice that this integral is positive, as it should be. I have attempted to give a rough qualitative sketch of a region in the plane whose area is computed by the integral.

    Integral #2
    ∫_{56}^{201} (1/x^{1/5}) dx = (5/4)x^{4/5}]_{56}^{201} = (5/4)·201^{4/5}-(5/4)·56^{4/5}
    Please notice that this integral is positive, as it should be. I have attempted to give a rough qualitative sketch of a region in the plane whose area is computed by the integral.

    Nasty comment department
    Well, yeah, the two pictures are the same, but I emphasized that the drawing was just a tiny qualitative help to us. Why, Ms. Johnson agreed that I accidentally left off any marks on the vertical axis so of course both pictures are valid.

    Integral #1, again
    Suppose A is a large positive number. Then ∫_{56}^{A} (1/x^5) dx = -1/(4x^4)]_{56}^{A} = -1/(4A^4)+1/(4·56^4)
    Now as A-->infinity, this "area" seems to approach a limit. The value of that limit is 1/(4·56^4). We will say that ∫_{56}^{infinity} (1/x^5) dx converges and that the value of this improper integral is 1/(4·56^4). Please notice that this integral is positive, as it should be. I have attempted to give a rough qualitative sketch of a region in the plane whose area is computed by the integral.

    Integral #2, again
    Suppose A is a large positive number. Then ∫_{56}^{A} (1/x^{1/5}) dx = (5/4)x^{4/5}]_{56}^{A} = (5/4)A^{4/5}-(5/4)·56^{4/5}
    Now as A-->infinity, this "area" seems to get larger. It certainly does not approach a finite limit. We will say that ∫_{56}^{infinity} (1/x^{1/5}) dx diverges (or does not converge).

    Both of the regions from 56 "out to infinity" look maybe something like what I've drawn. Although I love pictures, I can't tell by looking that one of these regions "has" finite area, and the other one does not. The phenomenon seems to be subtle.
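    One way to "see" the subtle difference is numerically rather than by picture. This sketch of mine pushes the upper limit A outward, using the antiderivatives computed above.

```python
# exact values of the integrals from 56 out to A, from the antiderivatives above
def tail_inverse_fifth_power(A):   # integrand 1/x^5
    return -1 / (4 * A**4) + 1 / (4 * 56**4)

def tail_inverse_fifth_root(A):    # integrand 1/x^(1/5)
    return (5/4) * A**0.8 - (5/4) * 56**0.8

for A in (1e3, 1e6, 1e9):
    print(A, tail_inverse_fifth_power(A), tail_inverse_fifth_root(A))
# the first settles at 1/(4*56^4); the second just keeps growing
```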

    We discussed the {con|di}vergence of the following integrals (or something like them):

  • ∫_{56}^{infinity} (236/x^5) dx: even though this is "larger" than the convergent integral, it also converges, and its value is 236 times the value of the convergent integral.
  • ∫_{56}^{infinity} 1/(x^5+44x^2) dx: this integral also converges. The region is smaller than the region we analyzed before, because the 44x^2 in the denominator (bottom, darn it!) makes the height lower, but still positive. The value of this integral might be difficult to determine, but I think the value of the integral is positive, and less than 1/(4·56^4).
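    The comparison in the second bullet can be checked numerically. This sketch (mine) truncates the integral at A=10^5 -- an assumption; the neglected tail is below the tail of 1/x^5 there, which is about 2.5·10^{-21} -- and compares against the bound 1/(4·56^4).

```python
def simpson(f, a, b, n=100000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: 1 / (x**5 + 44 * x**2)
approx = simpson(f, 56.0, 1e5)   # truncated tail assumed negligible
bound = 1 / (4 * 56**4)          # value of the integral of 1/x^5 from 56 to infinity
print(approx, bound)             # approx is positive and below the bound
```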

    Well, this all seems slightly silly. But it isn't because improper integrals arise frequently in applications and sometimes are much easier and more important to compute than standard definite integrals.

    Escape ..
    When I was very young, I read a science fiction novel by Robert Heinlein which stated "... the escape velocity from the Earth is 7 miles per second ..." and now I would like to sort of verify this using only well-known (?) facts and some big ideas of physics.

  • The radius of the earth
    Well, a sketch of the (continental) United States is shown to the right. There are 4 time zones. The U.S. is about 3000+ (maybe 3200?) miles wide. Therefore one time zone is about 1000 miles wide (I think the Pacific time zone actually slops a bit into the ocean), and since there are 24 time zones around the world, the circumference of the world is ... uhh ... about 25,000 miles. Or so. And therefore the radius of the earth is that divided by 2Pi, and therefore the radius of the earth is about 4,000 miles.
  • Newton and gravitation
    Two masses attract each other with a force whose magnitude is proportional to the product of the masses and inversely proportional to the square of the distance between them. Therefore, if I have a mass, m, and if the Earth has mass M, the magnitude of the force of gravity is GmM/r^2. G is a constant.
  • Work lifting up
    Suppose we want to lift a mass m from the surface of the earth to a distance R, where R is very large. Then the work done is force multiplied by distance. The force needed to act against gravity certainly changes with distance. So I will compute the work with calculus. If x is some distance between 4,000 and R, then the force is GmM/x^2. If the distance is a little bit, say, dx, then the work dW needed is (GmM/x^2) dx. The total work, W, is ∫_{x=4,000}^{R} (GmM/x^2) dx, which I can compute readily as -GmM/x]_{4,000}^{R} = GmM({1/4,000}-{1/R}). I think I did the minus signs correctly. Notice that as R-->infinity, this work -->GmM/4,000: this is the most work you can do, to get anywhere in the universe (assuming the universe is empty except for the mass m and the earth, of course).
  • Kinetic energy
    How much kinetic energy would we need to supply to the mass m so that it would equal the potential energy the mass would have if it were lifted out to anywhere in the universe? Well, kinetic energy is (1/2)mv^2 and that potential energy we already computed is GmM/4,000. So (1/2)mv^2=GmM/4,000, and thus v^2=2GM/4,000. But what is GM?
  • But F=ma
    On the earth, a, the acceleration of gravity, is 32 ft/sec^2. Yes, this is an archaic system of measurement, but that's part of the fun. But also F=GmM/(4,000)^2. So GmM/(4,000)^2=32m. Therefore GM=(4,000)^2·32.
  • And the answer is ...
    v^2=2GM/4,000=2[(4,000)^2·32]/4,000=8,000·32=256,000 in mixed units (miles multiplied by ft/sec^2). Dividing by 5,280 to convert the feet to miles gives approximately 50 (miles/sec)^2. Therefore v, the escape velocity, is about 7 miles per second. I think this computation is so silly that it is cool.
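    The whole back-of-the-envelope chain fits in a few lines of code. The numbers below are the lecture's rough values, in the same archaic mixed units.

```python
import math

R_miles = 4000.0        # rough radius of the earth, in miles
g_ft = 32.0             # acceleration of gravity, in ft/sec^2
ft_per_mile = 5280.0

GM = R_miles**2 * g_ft              # from GmM/R^2 = 32m
v_sq_mixed = 2 * GM / R_miles       # miles * ft / sec^2
v_sq = v_sq_mixed / ft_per_mile     # now in miles^2 / sec^2
v = math.sqrt(v_sq)
print(v)  # about 7 miles per second
```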

    Inverse powers
    We considered which integrals of the form ∫_{56}^{infinity} (1/x^STUFF) dx would converge and which would diverge. Here were the conclusions we reached:
    If STUFF>1, then the integral would converge.
    If STUFF<1, then the integral would diverge.
    When STUFF=1 we had to consider the integral separately, because the antiderivative was no longer a simple power of x, but was ln(x). We concluded that
    If STUFF=1, then the integral would diverge.

    Exponential decay versus polynomial growth
    I asked which other improper integrals might converge. Well, exponential decay was suggested. So, of course, ∫_{0}^{infinity} e^{-x} dx converges. We computed this as the limit of ∫_{0}^{A} e^{-x} dx as A-->infinity, and we got 1 as the answer. I wondered if, say, ∫_{0}^{infinity} x^{56}e^{-x} dx converged?

    Thinking about how x^{56}e^{-x} behaves when x gets large leads to the question: which of x^{56} and e^x gets bigger faster? So I answer this:

     lim    x^56            lim   56x^55  [   After   ]      lim   const
    x-->inf ----- =(L'hop) x-->inf ------ = [  several  ]= x-->inf -----
            e^x                    e^x    [uses of L'H]            e^x
    So this limit must be 0. This is important. For example, consider the function f(x)=x^{3124}e^{-0.00002x}. Here there is an enormous power of x, multiplied by a very slightly (?) decreasing exponential. Let me call upon something that can compute better than I can:
    > f:=x->x^3124*exp(-.00002*x);
                            f := x -> x^3124 exp(-0.00002 x)
    > f(10);
                                  0.9998000200 10^3124
    > f(100);
                                  0.9980019987 10^6248
    > f(10^10);
                                  0.1269460960 10^(-55618)
    So f(10) is large, and f(100) is even larger. And f(10^10) is very, very, very small, indeed. In many applications, exponential growth and decay occur, and they eventually "win" over what might seem huge competition.
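    The lecture used Maple, which handles these huge and tiny numbers symbolically. In ordinary floating point, f(10^10) would underflow, so a sketch like this one (mine, in Python) works with log10 f(x) = 3124·log10(x) - 0.00002·x·log10(e) instead.

```python
import math

def log10_f(x):
    # log10 of f(x) = x^3124 * exp(-0.00002*x), to dodge overflow/underflow
    return 3124 * math.log10(x) - 0.00002 * x * math.log10(math.e)

for x in (10.0, 100.0, 1e10):
    print(x, log10_f(x))
# grows at first (about 3124, then about 6248), but at x = 10^10
# the exponential has crushed it far below zero
```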

    Some more improper integrals
    We had already computed ∫_{0}^{infinity} e^{-x} dx=1. Now I computed ∫_{0}^{infinity} xe^{-x} dx. I used (of course) integration by parts. Here the parts were:

     u = x         du=dx
    dv = e^{-x}dx   v=-e^{-x}
    The boundary term is (x)(-e^{-x})]_{0}^{A} (as A-->infinity). When x=0 this "disappears" because of the first factor. And Ae^{-A}-->0 because exponential decay is faster than any polynomial growth. The remaining integral is ∫_{0}^{A} e^{-x} dx (the minus signs cancel), which -->1. So we see that ∫_{0}^{infinity} xe^{-x} dx converges and its value is 1.
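    A quick numerical check (mine, not the lecture's): truncate the improper integral at A=50, where the neglected tail is on the order of 50e^{-50}, and apply Simpson's rule.

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# truncate the improper integral at A = 50 (assumption: the tail is negligible)
val = simpson(lambda x: x * math.exp(-x), 0.0, 50.0)
print(val)  # close to 1
```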

    I wonder what ∫_{0}^{infinity} x^n e^{-x} dx is? Are all of the values equal to 1? What's the pattern?

    Go whole hog
    "To engage in something without reservation or constraint"
    "If you go the whole hog, you do something completely or to its limits."
    "To carry out or do something completely."
    I will do all of these computations at the same time. Suppose n is a positive integer. Define I_n to be ∫_{0}^{infinity} x^n e^{-x} dx, which will be a convergent improper integral. What is its value? So I will integrate by parts.

     u = x^n       du=n x^{n-1}dx
    dv = e^{-x}dx   v=-e^{-x}
    The boundary term here is (x^n)(-e^{-x})]_{0}^{A} (as A-->infinity). When x=0 this "disappears" (for n a positive integer!) because of the first factor. And A^n e^{-A}-->0 because exponential decay is faster than any polynomial growth. The remaining integral is n∫_{0}^{infinity} x^{n-1}e^{-x} dx (the minus signs cancel), so knowing that the integral I_{n-1} converges implies that the integral I_n converges, and I_n=n·I_{n-1}.
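    The recursion can be checked numerically. A sketch of mine: truncate each integral at A=80, an assumption that is harmless for small n since x^n e^{-x} is astronomically small there.

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def I(n, A=80.0):
    # I_n, truncated at A (assumption: the tail past A is negligible for small n)
    return simpson(lambda x: x**n * math.exp(-x), 0.0, A)

for n in range(1, 6):
    print(n, I(n), n * I(n - 1))  # each I_n should match n * I_(n-1)
```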

    So these integrals are ... (!!!)
    We know these facts. If I_n=∫_{0}^{infinity} x^n e^{-x} dx, then I_0=1 and I_n=n·I_{n-1} for every positive integer n.
    Well, I happen to know some other numbers which obey these rules: they are the factorials. Therefore I_n=n! for all n. Officially this is a proof by a technique called
    mathematical induction.
    Maple knows these integrals. Look, 120 is 5!:

    > int(x^5*exp(-x),x=0..infinity);
                                      120
    And therefore one-half factorial is ...
    So the integral expression ∫_{0}^{infinity} x^n e^{-x} dx can be used to define n! even for n's that are not positive integers.

    > int(sqrt(x)*exp(-x),x=0..infinity);
                                 1/2 Pi^(1/2)
    So (1/2)! is sqrt(Pi)/2. Not at all clear!!!!
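    Python's standard library knows this extension too: math.gamma satisfies gamma(n+1)=n!, so "(1/2)!" is gamma(3/2). A quick check:

```python
import math

half_factorial = math.gamma(1.5)   # "(1/2)!" = Gamma(3/2)
print(half_factorial)              # about 0.8862, i.e. sqrt(Pi)/2
print(math.sqrt(math.pi) / 2)
print(math.gamma(6))               # 5! = 120
```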

    Maintained by and last modified 9/11/2005.