On Monday I hope that students will do problems from two previous final exams. I will have office hours on Friday, December 16, and Monday, December 19, and be available for a review session from 7 to 9 PM on Tuesday, December 20. The final exam is at noon on December 21 in Hill 525.
I got a formula for area and checked it sort of on a circle. The
formula was derived by looking at circular sectors and seeing how much
area they had.
INT_{STARTING THETA}^{ENDING THETA}(1/2)r^{2} dtheta
Then I considered a spool of thread with 50 yards of thread. We found the length of a cardioid, r=A(1+sin(theta)). We needed to use a trig identity. The length is 8A, so the thread could enclose a cardioid with A=6.25. We then found the area of such a cardioid, and this was A^{2}(3Pi/2). With A=6.25, we saw that such a spool would make a very large Valentine!
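A quick numerical check (my own sketch in Python, not something we did in class; the function name is mine) of the polar length and area formulas for the cardioid. With 50 yards of thread, the closed form 8A gives A=50/8=6.25, and the area (3Pi/2)A^2 is then about 184 square yards.

```python
import math

# Hypothetical check (not from the lecture): for the cardioid
# r = A(1 + sin(theta)), integrate the polar arc-length and area formulas
# with a midpoint rule and compare with the closed forms 8A and (3*pi/2)A^2.
def cardioid_length_and_area(A, steps=200_000):
    h = 2 * math.pi / steps
    length = 0.0
    area = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        r = A * (1 + math.sin(t))
        dr = A * math.cos(t)                       # dr/dtheta
        length += math.sqrt(r * r + dr * dr) * h   # ds = sqrt(r^2 + (dr/dtheta)^2) dtheta
        area += 0.5 * r * r * h                    # dA = (1/2) r^2 dtheta
    return length, area

L, S = cardioid_length_and_area(1.0)
# Expect L close to 8 and S close to 3*pi/2 when A = 1.
```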
HOMEWORK
Read the text on
parametric curves and do problems, please. Read the text on polar
coordinates and do problems, please. Also I will ask Mr. Scheinberg to answer questions about
these four sections of the text.
Here is information about the final
exam. Please think about when we should have a review session or I
should have office hours. I hope students will do problems from the
last two final exams I gave in Math 192 on Monday, the last class.
I showed some parametric curves from the book Normal Accidents by Charles Perrow: pictures of ship collision tracks.
I described my "favorite" parametric curve: (x=t^3-t, y=1/(1+t^2)).
I wanted to do calculus with parametric curves. The object was to get formulas for the slope of the tangent line to a parametric curve and for the length of a parametric curve.
If x=f(t) and y=g(t), then f(t+delta t) is approximately f(t)+f'(t)delta t+M_{1}(delta t)^2 (this is from Taylor's Theorem). Also g(t+delta t) is approximately g(t)+g'(t)delta t+M_{2}(delta t)^2. The reason for the different "M"s is that these are different functions to which Taylor's Theorem is being applied. Then we analyze the slope of the secant line, (delta y)/(delta x): this is (g'(t)delta t+M_{2}(delta t)^2)/(f'(t)delta t+M_{1}(delta t)^2), and we see that the limit as delta t->0 must be g'(t)/f'(t), and this must be the slope of the tangent line, dy/dx. The formula in the book, developed in a different way using the Chain Rule, is also nicely stated in classical Leibniz notation as (dy/dt)/(dx/dt).
I applied this to find the equation of a line tangent to my favorite curve, x=t^3-t, y=1/(1+t^2), when t=2. I found a point on the curve and I found the slope of the tangent line, and then found the line. It sloped slightly down, just as the picture suggested. Then I found the angle between the parts of the curve at the "self-intersection", when t=+/-1. This involved some work with geometry and with finding the slopes of the appropriate tangent lines. The best we could do was with some numerical approximations to the angles.
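A sketch (mine, not the in-class computation) of the slope formula dy/dx=(dy/dt)/(dx/dt) for this curve at t=2, checked against the slope of a short secant:

```python
import math

# For x = t^3 - t, y = 1/(1+t^2): dx/dt = 3t^2 - 1, dy/dt = -2t/(1+t^2)^2.
def slope(t):
    dxdt = 3 * t * t - 1
    dydt = -2 * t / (1 + t * t) ** 2
    return dydt / dxdt

m = slope(2.0)            # -4/275: slightly negative, matching the picture

# Compare with the slope of a secant through nearby parameter values.
def point(t):
    return (t ** 3 - t, 1 / (1 + t * t))

(x1, y1), (x2, y2) = point(2.0 - 1e-6), point(2.0 + 1e-6)
secant = (y2 - y1) / (x2 - x1)
```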
I asked people what the curve x=sin(t)+2cos(t), y=2cos(t)-sin(t),
looked like. We decided it was bounded because sine and cosine are
always between -1 and +1, so the points of the curve must be in the
box where |x|<=3 and |y|<=3.
I remarked that for "well-behaved" parametric curves, vertical tangents occur when dx/dt=0, and horizontal tangents, when dy/dt=0.
Then I briefly discussed the arc length between (f(t),g(t)) and (f(t+delta t),g(t+delta t)). With the approximations f(t+delta t)=f(t)+f'(t)(delta t) and g(t+delta t)=g(t)+g'(t)(delta t) we saw that this distance (using the standard distance formula) was approximately sqrt(f'(t)^2+g'(t)^2) delta t. We can "add these up" and take a limit. The length of a parametric curve from t=START to t=END is INT_{START}^{END}sqrt(f'(t)^2+g'(t)^2) dt. The integrand (the function integrated) is frequently called the speed: sqrt(f'(t)^2+g'(t)^2).
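The "adding up" can be imitated numerically. Here is a minimal Python sketch (the helper name is mine) applied to the unit circle (x=cos t, y=sin t), whose length from t=0 to t=2Pi should be 2Pi:

```python
import math

# Sum speed * delta t over small steps, with the derivatives f'(t), g'(t)
# estimated by central differences.
def parametric_length(f, g, start, end, steps=100_000):
    h = (end - start) / steps
    total = 0.0
    for k in range(steps):
        t = start + (k + 0.5) * h
        eps = 1e-6
        df = (f(t + eps) - f(t - eps)) / (2 * eps)
        dg = (g(t + eps) - g(t - eps)) / (2 * eps)
        total += math.sqrt(df * df + dg * dg) * h   # speed * delta t
    return total

L = parametric_length(math.cos, math.sin, 0.0, 2 * math.pi)
```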
We applied this to find the length of the ellipse whose bounding box we investigated. This turned out to be what's called an elliptic integral. Elliptic integrals first occurred when people wanted to find the length of ellipses. The length can't be written in terms of "elementary" functions. The integrals also happened to arise in certain physical computations. Therefore these functions were analyzed, and, before computers, tables were constructed. Sigh. Yet another function (actually, another bunch of functions since it turns out there are several kinds of elliptic integrals).
The instructor began considering polar coordinates.
Go to the old dead tree at dawn. Then at sunrise, walk fifteen paces in a north-northeast direction from the tree. Dig there for treasure.
We tried to understand this, and decided that it represented locating a point in the plane with reference to a fixed origin, "the pole", and a fixed direction, the polar axis. The origin was the dead tree and the fixed direction was sunrise. Thus what is specified is the distance from the tree (r=15) and the angle from the sun's direction, assumed east (theta=Pi/2-Pi/8=3Pi/8, since north-northeast lies between north and northeast). Of course in the purported text I quoted from it would be forgotten that the sunrise's direction changed with the season, or that there was no treasure but a booby prize, or the steps were taken by a giant or a dwarf or ... many things.
I introduced polar coordinates and got the equations relating (x,y) to (r,theta). I looked at a point in rectangular coordinates and saw that there were infinitely many pairs of polar coordinates which described the position of this point (I gave examples). I remarked that in many applications and computer programs, there were restrictions on r (usually r is nonnegative) or on theta (say, theta must be in the interval [0,2Pi) or in the interval (-Pi,Pi] or something) in an effort to restore uniqueness to the polar coordinate representations of a point's location. No such restrictions were suggested by the text used in this course.
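A small Python sketch (mine; the r>=0 and roughly (-Pi,Pi] convention here is one common choice, not the text's) of the conversions and of the non-uniqueness:

```python
import math

def to_rect(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def to_polar(x, y):
    # atan2 takes values in [-pi, pi]; hypot gives a nonnegative r.
    return (math.hypot(x, y), math.atan2(y, x))

# Infinitely many polar pairs name the same point:
p1 = to_rect(2, math.pi / 3)
p2 = to_rect(2, math.pi / 3 + 2 * math.pi)   # add a full turn
p3 = to_rect(-2, math.pi / 3 + math.pi)      # negative r, opposite direction
```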
Polar coordinates are useful for looking at things with circular symmetry. For example, the equation r=3 is much easier to contemplate than the equation x^2+y^2=9.
I gave equations relating polar coordinates to rectangular coordinates and vice versa. We will sketch some polar curves on Wednesday, when calculus in polar coordinates will also be skimmed. Sigh.
The instructor discussed one problem from workshop #5. Sigh.
We began a discussion of parametric curves.
I first studied the "unit circle" (x=cos t, y=sin t) and discussed how this compared with x^2+y^2=1. There is more "dynamic" information in the parametric curve description, but there is also more difficulty: two equations rather than one.
Students sketched (x=cos t,y=cos t) and (x=t^2,y=t^2) and (x=t^3,y=t^3): all these are pieces of the line y=x: I tried to describe the additional dynamic information in the parametric definitions.
Then I tried to sketch the parametric curves (x=2 cos t, y = 3 sin t) and (x=1+sin t, y= cos t - 3) which I got from another textbook. The first turned out to be an ellipse whose equation is (x/2)^2+(y/3)^2=1 and the second turned out to be a circle of radius 1 centered at (1,-3). These geometric curves intersect, but do they actually describe the motion of particles which have collisions? Well, one intersection is a collision and the other is not (this is generally not too easy to see!).
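Here is a sketch (mine) of the collision check: an intersection of the two curves is a collision only if both motions reach the point at the same parameter value t. At t=3Pi/2 both particles are at (0,-3):

```python
import math

# The ellipse (x = 2cos t, y = 3sin t) and the circle
# (x = 1 + sin t, y = cos t - 3), viewed as motions of two particles.
def p_ellipse(t):
    return (2 * math.cos(t), 3 * math.sin(t))

def p_circle(t):
    return (1 + math.sin(t), math.cos(t) - 3)

# At t = 3*pi/2 both particles are at (0, -3): a genuine collision.
a = p_ellipse(3 * math.pi / 2)
b = p_circle(3 * math.pi / 2)
```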
This is a picture of the kinetic aspect of the situation just described.
I tried to describe the parametric description of a cycloid. This is done with a slightly different explanation in section 10.1 of the text.
HOMEWORK
Read the text on
parametric curves and do problems, please. Also I will ask
Mr. Scheinberg to be prepared to answer
questions about parametric curves and polar coordinates. If you prepare
for this by looking at these sections, maybe we can devote the last
class meeting to some review.
[Two pictures were shown here: the partial sums SUM_{n=1}^{8}sin(nx)/2^{n} and SUM_{n=1}^{100}sin(n!x)/n^{2}.]

A long trip to explain the smooth picture
I then embarked on a (seeming!) digression to explain the first
picture.
Ways of looking at the plane
As pairs of real numbers
As two-dimensional vectors
As complex numbers
Complex numbers
Discussion of complex number addition, multiplication, and even
division.
On the way, definitions given about the real part and
imaginary part of a complex number, its complex
conjugate, and the modulus of a complex number (distance to
origin). The last generalizes the absolute value of a real
number.
Very quickly: convergence of a complex sequence as determined by the
modulus of the difference between the sequence elements and the limit
getting (and staying!) small.
Euler's formula
Insert iy into the Taylor series centered at 0 for the exponential
function. Notice the behavior of powers of iy, etc. Collect and gather
terms. Obtain:
e^{it}=cos(t)+i sin(t) and cos(t)=[e^{it}+e^{-it}]/2 and sin(t)=[e^{it}-e^{-it}]/(2i)
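A numerical check of these three formulas using Python's complex arithmetic (my sketch, not part of the lecture):

```python
import cmath
import math

t = 0.7
z = cmath.exp(1j * t)                                  # e^{it}
cos_from_exp = (cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2
sin_from_exp = (cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j)
# z.real should be cos(t), z.imag should be sin(t).
```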

Things we can learn from this
Engineers and sums
The derivative of
SUM_{n=1}^{infinity}sin(nx)/2^{n}
should be
SUM_{n=1}^{infinity}n cos(nx)/2^{n}
and that sum converges almost as fast (?) as the original sum. So the
original sum should be differentiable (it wants to be) and this
sum should be the derivative.
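The engineers' reasoning can be supported numerically. This Python sketch (truncating at 60 terms is my choice; the tails are tiny) compares a difference quotient of the partial sums with the term-by-term derivative series:

```python
import math

# Partial sums of SUM sin(nx)/2^n and of the term-by-term derivative
# SUM n*cos(nx)/2^n.
def S(x, N=60):
    return sum(math.sin(n * x) / 2 ** n for n in range(1, N + 1))

def D(x, N=60):
    return sum(n * math.cos(n * x) / 2 ** n for n in range(1, N + 1))

x, h = 0.3, 1e-5
central_difference = (S(x + h) - S(x - h)) / (2 * h)
# central_difference should be very close to D(x).
```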
Now consider
SUM_{n=1}^{infinity}sin(n!x)/n^{2}.
Here the amplitudes, sin(n!x), conceal a whole lot of wiggling. If the
derivative existed, then, well, it should want to be
SUM_{n=1}^{infinity}n! cos(n!x)/n^{2}.
But this series is way too divergent. I bet that this series
doesn't converge for many x's and that the function represented by the
original n! sum is not differentiable for many x's. That is
indeed true. A verification is tedious and somewhat intricate and too
difficult for me here. (By the way, it is differentiable at, say, some
local max's and min's with derivative=0.) The picture looks very
jagged. In fact, lots of motions in "real life" look like
this. See here for a
general discussion of Brownian motion, which is probably better
described by statistical ideas than by traditional calculus. Brownian
motion is an important physical model, and was treated by Einstein in
one of his epochal papers written in 1905. See Einstein 1905: The
Standard of Greatness by John S. Rigden (less than $15 on
Amazon).
A horrible function
I devoted the remainder of my time to studying the following
function:
f(x)=e^{-1/x^{2}}
if x is not 0, and f(0)=0.
This turns out to be a very
weird and very nonclassical function. It would not be "believed" by
mathematicians before the 20^{th} century. We "graphed" y=f(x)
first by thought: no values of the exponential function can be
negative (or even 0). But why should f(0) be defined as 0? Well, if
x is close to 0 in the formula e^{-1/x^{2}} then the exponential
function has as input a large negative number, which means its value
will be a very small positive number. Therefore it is "natural" (if the
defined function is to be continuous!) to define f(0) to be 0. If x is
not 0, the function has values less than 1 because it has values of
the exponential function for negative inputs. And when x gets large,
the function gets close to 1, because exp's values for inputs which
are small negative are very close to 1. Here is a picture of y=f(x)
with its horizontal asymptote. Note, please, the axes: x is
between -10 and 10, and y is more or less between 0 and 1.
The most interesting aspect of this function is its behavior near 0
which doesn't show up much in the picture above. So here is a
picture with x between -.1 and .1, again distorted to fill a square.
This function's graph is flat near 0. In fact, most people who
think about this function state that it is infinitely flat near
0.
Differentiability of f(x)
I tried to argue that f(x) was differentiable, not only away from 0
where the derivative is given by a simple formula, but at 0, where we
needed to use the definition of derivative and evaluate the resulting
limit with L'Hopital's rule. Indeed, it turns out that any
derivative of f(x) is given by something like this:
For x not 0, a formula of the form
e^{-1/x^{2}}·[polynomial in (1/x)];
for x=0, just 0.
I didn't verify this very well, but it is true.
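Here is a partial numerical verification (mine, and only a sketch): away from 0 the first derivative is e^{-1/x^{2}}·(2/x^{3}), a "polynomial in 1/x" times f, while at 0 the difference quotient f(h)/h is astonishingly small:

```python
import math

def f(x):
    return 0.0 if x == 0 else math.exp(-1 / x ** 2)

def fprime(x):
    # valid away from 0: f'(x) = e^{-1/x^2} * (2/x^3)
    return f(x) * 2 / x ** 3

h = 1e-4
quotient = (f(0.5 + h) - f(0.5 - h)) / (2 * h)   # should match fprime(0.5)
flatness = f(0.05) / 0.05                        # difference quotient near 0
```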
So that ...
The Taylor series of f(x) centered at x=0 is the zero power series
(all the coefficients are 0). f(x) is equal to its Taylor series
only at x=0.
Nausea
This would have made many classical mathematicians (and even
physicists!) of the 19^{th} century very upset. They wanted to
believe that functions more or less were the same as their power
series expansions whenever possible, because then everything would be
computable, etc. Well, that belief is false. And it turns out
that such functions as this f(x), which might have been spurned
by all those people, are very useful in studying partial differential
equations.
Word of the day: spurn
HOMEWORK
Please prepare
problems from the last three sections on power series for Thursday.
We reviewed for the exam. In particular, I attempted to show how comparison with geometric series would allow estimation of the sums of various series.
Sums of series
I used Taylor's theorem to show that the series mentioned above
converge where appropriate with sums equal to the functions they
represent. That is:
We know that f(x)=T_{n}(x)+R_{n}(x). When does
R_{n}(x)->0 as n->infinity?
For any real number x, the remainders ->0 as n->infinity in the series for e^{x} and sin(x) and cos(x).
If -1<x<1, the remainders ->0 as n->infinity in the geometric series, the series for log(1+x), and the series for arctan(x).
A power series centered at a is an infinite series, SUM_{n=0}^{infinity}c_{n}(x-a)^{n}. Most of the time (almost all of the time, actually) we will have a=0. In this sum, the n=0 term when x=a, which symbolically is c_{0}(a-a)^{0} or c_{0}0^{0}, will be understood to mean just c_{0}. So, for power series, 0^{0} means 1.
Suppose we call S(x) the sum of the power series where the series converges: inside its interval of convergence. The series must converge absolutely at any point inside the interval of convergence which is not a boundary point (you can't be sure at boundary points). This is all proved by comparing the series to well-chosen geometric series, as I tried to indicate last time. Some other results follow in a similar way but the verifications, while not changing in nature, do get lengthier.
If S(x)=SUM_{n=0}^{infinity}c_{n}(x-a)^{n} and if r is the radius of convergence of the sum, then S(x) is differentiable inside the interval of convergence, and the series may be differentiated and antidifferentiated term by term there (the resulting series have the same radius of convergence).
Example 0
If
f(x)=SUM_{n=0}^{infinity}c_{n}(xa)^{n},
then I found using repeated differentiation and substitution, that
c_{n}=f^{(n)}(a)/n! and therefore the power series
must be the Taylor series of f(x) at x=a.
Example 1
Suppose f(x)=e^{x}. If e^{x} equals a
power series centered at 0, it must be the series
SUM_{n=0}^{infinity}x^{n}/n! .
Example 2
Suppose f(x)=sin(x). If sin(x) equals a power
series centered at 0, then it must be the Taylor series. But d/dx has
the "closed loop" sin(x)->cos(x)->-sin(x)->-cos(x) and back to
sin(x). When evaluated at 0, this gives 0, 1, 0, -1,
... repeating. The power series must be
SUM_{n=0}^{infinity}(-1)^{n}x^{2n+1}/(2n+1)!. This
at least deserves some explanation. The (-1)'s to various powers cause
the terms to alternate. The 2n+1's cause only the odd terms to appear:
since sin(0)=0, the even terms are there but their coefficients are 0.
Example 2a
If f(x)=cos(x) is equal to a power series
centered at 0, it must be the series SUM_{n=0}^{infinity}(-1)^{n}x^{2n}/(2n)!.
Example 3
If f(x)=1/(1+x), then the power series must be
SUM_{n=0}^{infinity}(-1)^{n}x^{n}.
This is because we can think of 1/(1+x) as the sum of a geometric
series whose first term is 1 and whose ratio is -x. Since there is
only one power series centered at a given point attached to a function, this
series must be the power series centered at 0.
Example 3a
What is the power series centered at 0 for
ln(1+x)? Well, if we differentiate ln(1+x) we get 1/(1+x), and that
has power series
SUM_{n=0}^{infinity}(-1)^{n}x^{n}. So
antidifferentiate the series, and get
SUM_{n=0}^{infinity}(-1)^{n}x^{n+1}/(n+1)+C. But
actually ln(1+x) at x=0 is ln(1+0)=0, so C is 0. And ln(1+x)'s power
series centered at 0 is the series written with C=0. Notice that this
series has radius of convergence 1. The function it is related to,
ln(1+x), has rather horrible misbehavior at x=-1 (it tends to
-infinity), so maybe that explains the lack of convergence of the
series past the radius of 1.
Example 3b
arctan(x) is the integral of 1/(1+t^{2}) from 0 to x. But
(again, geometric series)
1/(1+t^{2})=SUM_{n=0}^{infinity}(-1)^{n}(t^{2})^{n}=SUM_{n=0}^{infinity}(-1)^{n}t^{2n}. Integrating
and substituting, we get the series
SUM_{n=0}^{infinity}(-1)^{n}x^{2n+1}/(2n+1)
as the power series for arctan(x) centered at 0. The radius of
convergence of this series is again 1, and now there is no clearly
discernible problem with the graph of arctan(x) near +1 or -1. The
lecturer tried to confuse the class with the idea that there is some
problem with arctan near +i and -i, complex numbers. Huh.
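A quick check (my sketch) that partial sums of the arctan series really do approach arctan inside the radius of convergence:

```python
import math

# Partial sums of SUM (-1)^n x^{2n+1}/(2n+1), compared with math.atan
# for |x| < 1.
def arctan_series(x, N=60):
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(N))

approx = arctan_series(0.5)
```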
Application #1
Suppose
f(x)=x^{3}e^{x^{4}}. What is the
17^{th} derivative of f at 0?
Application #2
Write INT_{0}^{1}cos(x^{3}) dx as the sum
of a series. How many terms are needed to get accuracy up to
10^{10}?
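A sketch of one way to do this (my truncation choices, not the in-class solution): integrate the cosine series term by term over [0,1], getting the alternating series SUM(-1)^{n}/[(2n)!(6n+1)], and compare with a crude numerical integration:

```python
import math

# cos(x^3) = SUM (-1)^n x^{6n}/(2n)!; integrating over [0,1] term by term
# gives SUM (-1)^n / ((2n)! (6n+1)).  Alternating, so the error after N
# terms is below the first omitted term.
def series_value(N):
    return sum((-1) ** n / (math.factorial(2 * n) * (6 * n + 1))
               for n in range(N))

val = series_value(8)

# Crude midpoint-rule check of the same integral.
steps = 100_000
check = sum(math.cos(((k + 0.5) / steps) ** 3) for k in range(steps)) / steps
```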
Application #3
Discussion of payoff, of average winning, of average entrance fee to a
game. First for a game with only 3 options, then for a game of this
type: toss a fair coin repeatedly until the first tail. Pay n dollars
if the first tail is on the n^{th} toss. Then what is the fair
entrance fee (=average winning, =expectation, etc.)? This is essentially
finding the value of
SUM_{n=1}^{infinity}n/2^{n}. We found
this sum by taking the sum of 1+x+x^{2}+x^{3}+... (a
geometric series), then taking d/dx, then multiplying by x, then
setting x=1/2.
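The computation can be repeated in Python (my sketch): differentiating the geometric series 1/(1-x) gives SUM n x^{n-1}=1/(1-x)^{2}, so SUM n x^{n}=x/(1-x)^{2}, and at x=1/2 the fair entrance fee is 2 dollars:

```python
# Closed form x/(1-x)^2 at x = 1/2 versus a long partial sum of n/2^n.
closed_form = 0.5 / (1 - 0.5) ** 2

partial = sum(n / 2 ** n for n in range(1, 80))
# Both should be (essentially) 2.
```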
Insouciance
The word of the day.
carefree;
unconcerned
We considered power series. We found various examples of boundary
behavior by considering power series centered at 0 whose terms were
x^{n}/n! interval of convergence all real numbers
n!x^{n} interval of convergence [0,0]
x^{n} interval of convergence (-1,1)
nx^{n} interval of convergence (-1,1)
(1/n^{2})x^{n} interval of convergence [-1,1]
(1/n)x^{n} interval of convergence [-1,1)
I did some work with estimates of a power series inside its radius of convergence by geometric series. The purpose of this was to show students that the sum of a power series is always continuous inside its radius of convergence.
The Oxford English Dictionary is a massive project (you can't really call it a book any more) which is, essentially, a historical dictionary of the English language. Access to the OED is free through Rutgers. The OED declares that phlegm first appeared in 1387 in written English. It was then spelled fleem (really). The spelling wandered a great deal in the next two hundred years. The ph replaced f, and the h and g somehow ... arrived. The result is a word which is spelled very differently from its modern pronunciation, a very strange word. But written English is generally strange.
Please see this page about Taylor's Theorem.
Geometric series?
I asked how to identify a geometric series, and we discussed that
for a while.
Example
Does
SUM_{n=1}^{infinity}n^{2}/3^{n}
converge? I looked at the ratios of successive terms, and, following a
suggestion of Mr. Townley we looked at
the infinite tail with n>=6. This allowed us, via a comparison test
and then a sum of a geometric series, to decide that the series
converged and even to get an overestimate of the sum. Indeed.
Statement of the Ratio Test
As in the text. For the p-series with p=2 (convergent!) and p=1/2
(divergent!) the Ratio Test returns the same result (the limit of
the ratios is 1). So if the limit is 1, we can tell nothing!
We did a number of examples. I issued several warnings stating that it was easy to make algebraic mistakes.
One example I asked if SUM_{n=1}^{infinity}2^{2n}/[n!(n+1)!] converged. After many cautions about possible errors, we decided it did converge. By the way here is some Maple opinion about this series:
> sum(2^(2*n)/(n!*(n+1)!),n=0..infinity);
                         1/2 BesselI(1, 4)
> evalf(%);
                             4.879732577
The first answer is to show that, indeed, everyone knows this is a Bessel function. Indeed. The next command asks for an approximate value.
Here is a heuristic idea behind both the Ratio and Root Tests. Here's an appropriate definition for heuristic:
First, heuristic (adjective) means
1. allowing or assisting to discover.
2. [Computing] proceeding to a solution by trial
and error.
If a_n is approximately ar^{n}, that is, like a geometric series, then the quotient a_{n+1}/a_{n} is approximately (ar^{n+1})/(ar^{n})=r, and the approximation should get better as n->infinity. This gives a background for the Ratio Test. For the Root Test, if we assume that a_n is approximately ar^{n} and take n^{th} roots, then some algebra shows that (a_n)^{1/n} is approximately a^{1/n}r, and we know that lim_{n->infinity}a^{1/n}=1 if a>0, so we get the Root Test, similar to the Ratio Test but with (a_n)^{1/n} replacing a_{n+1}/a_n.
I did a sequence of wonderful problems, mostly from the book.
Dilogarithm
I looked at the power series
SUM_{n=1}^{infinity}x^{n}/n^{2}.
I asserted that, where this converged, it defined a function called
the
dilogarithm. Google gives about 43,600 pages for
the dilogarithm. One page is written by an undergraduate at the
University of Texas. This young woman asserts that the dilogarithm is
"a cool function".
We used the Ratio Test to conclude that the dilogarithm series converges absolutely when |x|<1 but diverges when |x|>1. If x=1, the series converges because it is a p-series with p=2>1. If x=-1 the series converges using the Alternating Series Test. So the series converges for all x in the interval [-1,1], which is called the interval of convergence for this series.
If |x|<=1/2, I split the series (for no good reason) into two pieces:
SUM_{n=1}^{10}x^{n}/n^{2} (a finite sum, just a polynomial!) + SUM_{n=11}^{infinity}x^{n}/n^{2} (an infinite tail!). I estimated the infinite tail.
In fact, we can now "look" at the graph of the sum. Well, I will look at the graph of the (polynomial) partial sum. The picture included here is that graph in black. I also asked Maple to shift the graph up by .01 and draw it in green, and down by .01 and draw it in blue. Hey, the real graph differs from the curve in black by much less than .01, really at most .000008. So I believe that what I am seeing is essentially the graph of y=dilogarithm(x).
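Here is a sketch (mine) of the split and the tail estimate for |x|<=1/2: termwise, the tail is at most (1/121)SUM_{n>=11}(1/2)^{n}=(1/121)(1/2)^{10}, which is about .000008:

```python
# Degree-10 partial sum of the dilogarithm series SUM x^n/n^2, plus the
# geometric bound on the tail: |x^n/n^2| <= (1/2)^n / 11^2 for n >= 11.
def dilog_partial(x, N=10):
    return sum(x ** n / n ** 2 for n in range(1, N + 1))

tail_bound = (1 / 121) * (1 / 2) ** 10   # about 8.1e-6

# Compare the degree-10 partial sum with a much longer one at x = 1/2;
# the gap should sit below the bound.
gap = abs(dilog_partial(0.5, 200) - dilog_partial(0.5, 10))
```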
HOMEWORK
Tomorrow we will have fun with many problems from the textbook.
These examples should convince you that life can be more complicated than first expected.
Example 1: harmonic series
Example 2: a geometric series
Logical confusion
Example 3: the alternating harmonic series
The Alternating Series test
The instructor shows a device with the help of "a student"
(Mr. Townley)
The image shown is from www.fotosearch.com/
where it is declared to be a "Royalty Free Photograph".
I wanted students to know that
|SUM_{n=1}^{infinity}a_{n}| <=
SUM_{n=1}^{infinity}|a_{n}|.
Some definitions
Absolute convergence.
Conditional convergence.
A theorem
Absolute convergence implies convergence.
Whose converse is not true
The alternating harmonic series.
An example
Exam warning
On Tuesday, November 22, which has the Wednesday class schedule.
We expanded the world by asserting that, in addition to demons and humans and angels, there were archangels. Some typical inhabitants:
Demons: 37(ln(n))^{4}, 3(ln(n))^{34}+9(ln(n))^{47}
Humans: n^{43}+8n^{3}, n^{.000000001}
Angels: 88e^{.0001n}, 5e^{10n}+2^{n}
Archangels: n!, 2(4)(6)···(2n)
In this hierarchy, every function of a family moving right eventually grows faster and is larger than every function of a family to its left.
Problem presentations
were made by Ms. Waters
and Mr. LaBouff. I tried to
advertise that their problems, although perhaps not as seemingly
intricate as others, were important in applications.
Comparison Theorem
I stated the text's major comparison result and tried a few problems
from 11.4. I stated that the major comparison ingredients were the
p-series and geometric series. Also convergence propagates from bigger
series to smaller series, while divergence propagates the other way. I
tried to quantify the problems: if a series converges, how many
terms are needed to approximate the sum within a certain tolerance? If
a series diverges, and the series has positive terms, how can we make
a partial sum bigger than some assigned number? The techniques are
again to compare with a geometric series or an improper integral, and
force the infinite tail of the series to be small or a partial
sum to be large.
More tomorrow!
Logic and logical words
Statements of the form "IF P THEN Q" are called
implications. P is frequently called the hypothesis or
antecedent, and Q is called the conclusion or consequent. Other
related expressions are:
IF Q THEN P. This is called the converse.
IF {not Q} THEN {not P}. This is called the contrapositive.
IF (not P) THEN (not Q). This is called the inverse.
Example
The logic of series sometimes is rather "twisty" and it may sometimes be useful to consider the preceding examples!
General geometric series
I found a formula for a+ar+ar^{2}+...+ar^{K}, the
K^{th} partial sum. I think it was
(a-ar^{K+1})/(1-r). Therefore as K->infinity, if |r|<1,
we get convergence. The sum is a/(1-r) if the first term is a.
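A numerical check (my sketch) of the partial-sum formula and the limit:

```python
# Compare a direct partial sum of a + ar + ... + ar^K with the closed form
# (a - a r^{K+1})/(1 - r), and watch it approach a/(1-r) since |r| < 1.
a, r, K = 5.0, 0.3, 20
direct = sum(a * r ** k for k in range(K + 1))
formula = (a - a * r ** (K + 1)) / (1 - r)
limit = a / (1 - r)
```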
Integral test
If f(x) is a positive decreasing function, then the series
SUM_{n=1}^{infinity}f(n) and the
improper integral
INT_{anywhere}^{infinity}f(x) dx converge or diverge together.
Also, the verification of this statement provides a way of getting (imprecise!) numerical estimates of tails and partial sums.
General p-series:
SUM_{n=1}^{infinity}1/n^{p}
Convergence for p>1, divergence otherwise.
Numerical examples
I also asked, what K should one take to be sure that the K^{th} partial sum, SUM_{n=1}^{K}1/[3^{n}+sqrt(n)], is within, say, 10^{-5} of the whole sum? Well, the tail, what we're leaving out, is just SUM_{n=K+1}^{infinity}1/[3^{n}+sqrt(n)]. This is termwise less than the series SUM_{n=K+1}^{infinity}1/3^{n}, a convergent geometric series whose sum is (a=1/3^{K+1} and r=1/3 so 1-r=2/3) a/(1-r)=(1/2)(1/3^{K}). A little experimentation shows that this is less than 10^{-5} when K is at least 10. Therefore I bet
> add(1/(3^n+sqrt(n)),n=1..10);
        (a long exact sum of fractions involving square roots)
> evalf(%);
        0.3989967724
that, to five decimal places, the sum is .39899.
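The same computation in Python (my sketch; the geometric tail bound (1/2)(1/3^{K}) comes from the comparison with SUM 1/3^{n}):

```python
import math

# Partial sum through K = 10 of SUM 1/(3^n + sqrt(n)), plus the bound on
# what the tail past K can contribute.
K = 10
partial = sum(1 / (3 ** n + math.sqrt(n)) for n in range(1, K + 1))
tail_bound = 0.5 * 3.0 ** (-K)   # (1/2) * 3^{-10}, below 10^{-5}
```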
Sequences, series
11.1: 2, 5, 6, 12, 13, 15, 18, 21, 26, 32, 34, 45, 46, 61, 64
11.2: 11, 14, 17, 18, 21, 22, 27, 38, 41, 44, 49
Integral tests, estimates
11.3: 3, 7, 9, 13, 16, 21, 28, 31, 34
Rates of growth, in heaven, on earth, and below
Several students wrote to me more or less asking about rates of growth
of functions. Here is an excerpt from my response, somewhat quirky, of
course, but also serious.
*****
I think intuition is extremely useful. It is also something that can
be informed and improved. Most people who study the behavior of
functions and how they grow probably have intuition sort of like this
(and I'll begin with a silly metaphor first):
The world is made up of a hierarchy (spelling?) of demons and humans and angels. All the demons are less than the humans and all the humans are less than the angels. The "internal" arrangements of {demonhumanangel} society are quite complex, but between the societies things are rather simple.
Now onto functions and growth of functions, if you can stop giggling. Let's think about polynomials: x^2 and .002x^3 and sqrt(5)x^9+98x^10. Polynomials are nice and I think maybe I can almost understand them. They are all a sum of monomials multiplied by constants. As x->+infinity, what matters is the highest degree term with a positive coefficient, and what matters if two polynomials have a highest degree term of the same degree is what the coefficient is.
Now polynomials are human. What are angels? An angel is a sum of constants multiplying exponentials of constant multiples of x. So angels are 2e^{3x} and 5e^{.0003x}+99e^{99x}. These functions also have rates of growth as x->+infinity, and we can compare two of them in a similar fashion, only here the comparison first looks for a positive coefficient multiplying an exponential with the largest positive growth number. So .99e^{.03x} is eventually bigger than 9999999999e^{.0003x}.
How are polynomials and exponentials related? Let me stick to things with positive coefficients. A very tiny exponential, say .00001e^{.00000000001x}, compared to a huge polynomial, say 10,000,000,000x^{100,000,000,000}, is eventually bigger, as x->infinity. Eventually, all angels outrank humans.
Now, continuing in our development of function growth via analogy and idiotic metaphor, let's consider polynomials of log functions: these are functions like 33(ln(x))^{30} and sums of them. Well, these are the demons. EVERY demon is eventually less than EVERY human.
Let me "compare" P(x)=33(ln(x))^{300} with, say, Q(x)=x^{.0001}. Poor Q(x) is a very weakly growing human, as x->+infinity. And, wow, P(x) is rather a strong demon. Indeed, P(10) is about 1.5 times 10 to the 110th power, and Q(10) is about 1.00023. But let me investigate their "ultimate strength". The simplest way is to consider the limit of P(x)/Q(x) as x->+infinity. Certainly this is a limit of the form infinity/infinity, so I should L'Hop the whole mess. If I do, the result seems to be:
Top=33(300)(ln(x))^{299}(1/x).
Bottom=.0001x^{-.9999}
I hope I did this correctly. Now let's do some algebra to this quotient. I will put all of the x powers downstairs, and push the constants out to the front. So the result is (I hope):
(ugly constant) 33(300)/.0001 multiplying (ln(x))^{299} divided by x^{.0001}.
Essentially all I have done is lowered the degree of the demon by 1, and I still want the limit as x->infinity. I hope you can convince yourself that eventually (after another 299 L'Hops?) the limit will be 0. This "miserable" human (?) eventually defeats a very powerful demon. It may take a while but this really really happens. For example, I bet (I just experimented in another window of my computer!) that if x is greater than 10^8, P(x) is LESS THAN Q(x). If you object that 10^8 is large, my response will be that there is just as much "room" between 10^8 and infinity as there is between, say, 17 and infinity. And scale of action doesn't matter to demons and humans and angels, only what EVENTUALLY happens.
Sigh. I hope this does help you. It may be more than you want to know but it really is more or less the mathematical truth. The metaphor is just there to help. People who study theoretical computer science and algorithms really worry about the growth of functions. They have the families of functions we have just discussed (called Exp and Poly and Log) but also many others. Sigh. You can look at their zoo if you like, to check that I'm not kidding: the complexity zoo.
Yes, we already know all about this, but frequently the more ways you
know to analyze even simple situations, the better off you will be. So
here I would like to compute the solution. If you really think
about the word compute, well, the "ground floor" of computation
is arithmetic. And the only functions I can compute with some
assurance are, maybe, polynomials. So suppose that I assume
this initial value problem has a polynomial solution:
y=A+Bx+Cx^{2}+Dx^{3}+Ex^{4}+....
Let us try to translate the information the differential equation and
the initial condition give. So:
dy/dx=y means
d/dx(A+Bx+Cx^{2}+Dx^{3}+Ex^{4}+....)=A+Bx+Cx^{2}+Dx^{3}+Ex^{4}+....
so if we "line up" the coefficients on both sides, we seem to get:
B=A from the constant terms
2C=B from the x terms
3D=C from the x^{2} terms
4E=D from the x^{3} terms
Etc. forever.
The y(0)=1 means that A=1.
Now feed A=1 through the previous collection of equations. We are
forced to assign these values:
A=1
B=1
C=1/2
D=1/6
E=1/24
Etc. Indeed, if you are a bit attentive, you can see that the
coefficient of x^{n} will be 1/n! (that's n factorial, again, the
product of the integers from 1 up to n). I will "understand" that 0!
will be 1, a special case of the notation.
(Uncomfortable!) Conclusion
The solution of the initial value problem
dy/dx=y
y(0)=1
is the infinite degree polynomial
1+1x+(1/2)x^{2}+(1/6)x^{3}+(1/24)x^{4}+...+(1/n!)x^{n}+...
but I have a hard time understanding how to compute such a
thing. I certainly can't add up an infinite number of terms, because
that is impossible (let's see, at one sum a second, adding up an
infinite number of terms would take ... eternity ... although I like to think positively, that is a bit daunting!).
So we will need to analyze this situation carefully. And we will.
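While no one can add infinitely many terms, a silicon pal can add quite a few. Here is a short Python sketch (my illustration, not anything from class) that sums the first terms of 1+x+(1/2)x^2+(1/6)x^3+...; with x=1 the partial sums should creep toward e:

```python
import math

def exp_partial_sum(x, n_terms):
    """Add the first n_terms terms of 1 + x + x^2/2! + x^3/3! + ..."""
    total = 0.0
    for n in range(n_terms):
        total += x ** n / math.factorial(n)
    return total

# With x = 1 the partial sums approach e = 2.718281828...
approx = exp_partial_sum(1.0, 20)
```

Twenty terms already agree with e to more than a dozen decimal places, which is exactly the sort of behavior we will need to analyze carefully.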
Homework?
Wonderful Mr. Scheinberg answered questions.
Definition time
An infinite series is a "formal" infinite sum. So, for example,
Dichotomy
If a series has nonnegative terms, then ...
Examples
(17n^{5}+4n^{2}+8)^{1/n}
Another sandwich:
Limits depend only on tails
1.367879478, 0.6353353374, 0.3831204465, 0.2683156682, 0.2067379638, 0.1691454278, 0.1437690293, 0.1253354648, 0.1112345219, 0.1000454004
It surely looks "good", controlled, maybe convergent, etc. Here is the 100^{th} term:
0.1000000000·10^{-5}
This is a sort of tiny number. But here is the 100,000,000^{th} term:
0.5163291506·10^{390865034}
which is quite a large number.
Interlude: geometry, axioms, logic, truth, beauty ...
http://mathworld.wolfram.com/ParallelPostulate.html
Calculators, graphs, etc., with rational numbers only
The Intermediate Value Theorem fails: our eyes deceive us!
The additional assumption
An iteration
Consider the function f(x)=sqrt(4x+73). This is a fairly simple
function. We will use it to define a sequence recursively:
x_{1}=1
x_{n+1}=f(x_{n})
Here are the first ten terms of the sequence:
1.000000000, 8.774964387, 10.39710814, 10.70459867, 10.76189550, 10.77253833, 10.77451406, 10.77488080, 10.77494887, 10.77496151
This sure looks (and smells?) like a convergent sequence. But you can't really tell. Only the infinite tail matters, and I can't directly "access" the infinite tail.
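We can at least watch many more terms than ten. Here is a Python sketch of the same recursion (an illustration, not a proof; no finite amount of computing reaches the infinite tail). If a limit L exists, it must satisfy L=sqrt(4L+73), so L^2-4L-73=0 and L=2+sqrt(77), which is about 10.7749615 — matching the terms above:

```python
import math

def f(x):
    return math.sqrt(4 * x + 73)

x = 1.0
for _ in range(60):       # iterate x_{n+1} = f(x_n) starting from x_1 = 1
    x = f(x)

# A limit L would have to satisfy L = sqrt(4L + 73),
# i.e. L^2 - 4L - 73 = 0, so L = 2 + sqrt(77).
L = 2 + math.sqrt(77)
```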
The harmonic numbers revisited
The numbers, {H_{n}}, were previously
defined. Here is another way to analyze them, by comparing them
to areas under the curve y=1/x.
Dichotomy of monotone sequences
Our word, dichotomy, has as its first definition,
a division into two, esp[ecially]. a sharply defined one.
That's certainly what I want here. Remember that a monotone sequence
is a sequence which is either increasing or decreasing. So the
sequence {(-1)^{n}} which alternates between +1 and -1 is not
monotone.
Either a monotone sequence is bounded and converges
or it is unbounded and diverges.
Example (trap rule approximations)
Suppose I was (unfortunately) interested in computing something like
∫_{0}^{2}cos(x^{7})/(1+x^{4}) dx. No
one knows an antiderivative of this function in terms of familiar
functions. I could define a sequence in the following way:
T_{n} is the result of using the trapezoid rule approximation
for the function cos(x^{7})/(1+x^{4}) on
the interval [0,2], when the interval is divided into n equal parts.
I certainly wouldn't want to compute this, but I (or my silicon
pals) could do it, if necessary. In addition, due to the
extravagant work we did earlier this semester, I know that if I had
to, I could find a number Q so that (if the letter I represents the
true value of the integral), then
|T_{n}-I|<Q/n^{2}. It wouldn't be fun to compute a
value of Q but we could do it.
I bet that the sequence {T_{n}} converges and that the limit
of this sequence is the value of I.
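Here is a sketch of how I might ask a silicon pal (in Python, purely as an illustration) to produce the terms T_{n}:

```python
import math

def integrand(x):
    return math.cos(x ** 7) / (1 + x ** 4)

def T(n):
    """Trapezoid rule for the integrand on [0, 2] with n equal parts."""
    a, b = 0.0, 2.0
    h = (b - a) / n
    total = 0.5 * (integrand(a) + integrand(b))
    for k in range(1, n):
        total += integrand(a + k * h)
    return h * total
```

The terms settle down as n grows, which is numerical evidence (not a proof) that the sequence {T_{n}} converges to the value of the integral.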
Example (decimal approximations to sqrt(2))
1^{2} is less than 2 and 2^{2} is greater than 2. So I'll start with 1.
1.4^{2} is less than 2 and 1.5^{2} is greater than
2. So my next term will be 1.4.
1.41^{2} is less than 2 and 1.42^{2} is greater than
2. So my next term will be 1.41.
1.414^{2} is less than 2 and 1.415^{2} is greater than
2. So my next term will be 1.414.
1.4142^{2} is less than 2 and 1.4143^{2} is greater
than 2. So my next term will be 1.4142.
Etc. (Etc. here means, uhhh, it is probably difficult to describe the
whole process exactly.) But I bet if q_{n} is the
n^{th} term in this sequence,
|q_{n}-sqrt(2)|<10^{-n+1}. (I think I wrote
10^{-n} in class but isn't that wrong?)
I bet that the sequence {q_{n}} converges and that the limit
of this sequence is sqrt(2).
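The "Etc." can actually be described to a computer with exact integer arithmetic. A Python sketch (my own description of the process, using the stdlib integer square root):

```python
from math import isqrt

def q(n):
    """n-th decimal approximation to sqrt(2): it has n-1 decimal places and
    is the largest such number whose square is less than 2."""
    scale = 10 ** (n - 1)
    k = isqrt(2 * scale * scale)   # floor(sqrt(2) * scale), computed exactly
    return k, scale                # the term q_n is k / scale

k, scale = q(5)   # should give 14142 / 10000, i.e. 1.4142
```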
Convergence
A sequence converges (roughly) if there is a number that it gets close
to and stays close to. That number is called the limit of the
sequence. A precise definition is near the bottom of p.703 of the
text.
Sequences don't have to converge
Just look at {(-1)^{n}} which flips back and forth from +1 to
-1. It does not get close and stay close to any one number, so this
sequence does not converge.
Facts about limits of sequences
There are a bunch of facts about limits and sequences which you should
know. All of these facts can be proved, and none require any advanced
techniques besides a great deal of concentration and patience. We
usually verify these facts in Math
311 which some of you may want to take. But right now, I'd just
like you to agree that they are probably true.
Algebraic facts
Order facts
Warning! Subtlety coming!!!
There's one more limit fact that I will recite next time which is
somewhat more subtle than all of these.
Sequences can be defined by formulas
This is probably the most familiar way.
r^{n}
Suppose r is between 0 and 1. Then {r^{n}} converges, and the
limit of this sequence is 0.
Why? Well, r^{n}=e^{n ln(r)}. Now ln(r) is
a negative number since r is between 0 and 1. Multiplying this
negative number by n as n→infinity makes it more and more negative. In fact,
n ln(r)→-infinity. But then
e^{n ln(r)}→0 (if you
have a picture of y=e^{x} in your head: e^{x}→0 as x→-infinity!).
a^{1/n}
Suppose a is a positive number. Then the sequence {a^{1/n}}
converges, and its limit is always 1.
Why? Well, a^{1/n}=e^{(1/n)ln(a)} and I know
that (1/n)ln(a)→0 as n→infinity. So
a^{1/n}→e^{0}=1 as n→infinity.
Comment I really don't think this is totally obvious. Here
are the 1/n^{th} powers of 2 as n goes from 1 to 10:
2.000000000, 1.414213562, 1.259921050, 1.189207115, 1.148698355, 1.122462048, 1.104089514, 1.090507733, 1.080059739, 1.071773463
These numbers go down to 1. And here are the 1/n^{th} powers of 1/3:
0.3333333333, 0.5773502692, 0.6933612743, 0.7598356856, 0.8027415617, 0.8326831776, 0.8547513999, 0.8716855429, 0.8850881521, 0.8959584598
These numbers go up to 1.
n^{1/n}
Huh: the n's in the base grow and the 1/n^{th} powers
push things down. I don't think the "winner" is clear, even though many students thought it was. Here are the first 20 terms:
1.000000000, 1.414213562, 1.442249570, 1.414213562, 1.379729661, 1.348006155, 1.320469248, 1.296839555, 1.276518007, 1.258925412, 1.243575228, 1.230075506, 1.218114044, 1.207442027, 1.197860058, 1.189207115, 1.181352075, 1.174187253, 1.167623484, 1.161586350
and maybe even now this isn't totally "clear". But, in fact, {n^{1/n}} does converge, and its limit is 1.
(3^{n}+7^{n})^{1/n}
I asked if this sequence converged. Here's some numerical evidence, the first 10 terms:
10.00000000, 7.615773106, 7.179054352, 7.058305379, 7.020125508, 7.007210537, 7.002652582, 7.000995354, 7.000379289, 7.000146315
So I guess that this sequence converges and the limit is 7. Well, that's true. Why? I will use the Squeeze result carefully. Certainly 7^{n}<=3^{n}+7^{n}<=7^{n}+7^{n}=2·7^{n}, so 7<=(3^{n}+7^{n})^{1/n}<=2^{1/n}·7. Since 2^{1/n}→1, the squeeze forces the limit to be 7.
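A Python sketch reproduces the evidence and checks the squeeze inequality (again, just an illustration):

```python
def a(n):
    # n-th term of the sequence (3^n + 7^n)^(1/n);
    # Python integers keep 3^n + 7^n exact before the 1/n-th power
    return (3 ** n + 7 ** n) ** (1.0 / n)

# The squeeze: 7 <= a(n) <= 2^(1/n) * 7 for every n >= 1
```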
Sequences can be defined recursively
Many sequences of great importance in science and engineering are
defined recursively. This means that terms of the sequence are
defined in terms of previous terms of the sequence. Let me discuss some
examples.
Sqrt(2) by Newton's method
Newton's method is a way of replacing a guess at a solution of f(x)=0
by an improved guess, obtained by "sliding down" a tangent line. It is
discussed in section 4.9 of the textbook. Here let me show a way to
approximate sqrt(2). I'll take f(x)=x^{2}-2. Then sqrt(2) is
the positive root of this function. The iteration or recursion step
describes how to go from a guess, x_{n}, to a new guess,
x_{n+1}. Let me tell you what the coordinates of the points in
the picture to the right are. First, A is the point (sqrt(2),0). C has
coordinates (x_{n},0) (the old guess, which is to be
improved). D is the point (x_{n},x_{n}^{2}-2)
on the curve y=f(x). The line tangent to the curve will have slope
f'(x_{n})=2x_{n}. Therefore, since the slope
represents the tangent of the angle CBD, it must be the ratio of the
geometric lengths DC/BC. But DC is x_{n}^{2}-2, and
so 2x_{n}=(x_{n}^{2}-2)/BC, and the length of
BC is (x_{n}^{2}-2)/(2x_{n}). The coordinates
of the point B are supposed to be (x_{n+1},0), and
x_{n+1} is the improved guess. So x_{n+1} will be
x_{n} with the length of BC subtracted from x_{n}: the guess moves backwards. Therefore
x_{n+1}=x_{n}-((x_{n}^{2}-2)/(2x_{n}))
The reason I'm going through this is that this specific formula
simplifies in a remarkable way.
x_{n}-(x_{n}^{2}-2)/(2x_{n}) = (2x_{n}^{2}-(x_{n}^{2}-2))/(2x_{n}) = (x_{n}^{2}+2)/(2x_{n}) = (1/2)(x_{n}+2/x_{n})
So we replace x_{n} by the average of x_{n} and 2/x_{n}.
x_{2}=3/2
x_{3}=17/12
x_{4}=577/408
x_{5}=665857/470832
x_{6}=886731088897/627013566048
x_{7}=1572584048032918633353217/1111984844349868137938112
You should understand that the values quoted are exact values. A decimal approximation of the last number is 1.414213562, which happens to agree with Maple's 10 digit approximation to sqrt(2).
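Maple isn't needed for the exact values; Python's fractions module can reproduce them with exact rational arithmetic. A sketch:

```python
from fractions import Fraction

x = Fraction(1)          # x_1 = 1
iterates = [x]
for _ in range(6):
    x = (x + 2 / x) / 2  # average x_n with 2/x_n, exactly
    iterates.append(x)

# iterates is now [1, 3/2, 17/12, 577/408, 665857/470832, ...]
```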
History
There is evidence that this averaging idea was known to "ancient
civilizations" (Egypt, India) and was used to improve approximations
to square roots. But I don't think that a nonrecursive (formula!)
method for defining the sequence is known.
Look at the recursion:
x_{n+1}=(1/2)(x_{n}+2/x_{n})
If x_{n}→ a limit, L, then surely
x_{n+1}→ the same limit, L, because {x_{n+1}} is just
the same sequence, moved one step further on. But then the recursion
x_{n+1}=(1/2)(x_{n}+2/x_{n}) becomes (as n→infinity)
L=(1/2)(L+2/L) and this is 2L=L+2/L which is L=2/L which is L^{2}=2
which is L=±sqrt(2). In fact, the limit should be +sqrt(2)
since we started at x_{1}=1>0, and the averaging process
keeps all terms positive.
Therefore ...
Have we proved that the sequence converges to sqrt(2)?
Actually, no. We showed (the logic is important!) that if the
sequence converges, then its limit is sqrt(2). Theory and
practical application are full of examples where the "if" part is
missing or untrue. Students should recognize this.
Another recursive sequence
We could define the sequence by
x_{1}=1
x_{n+1}=(n+1)x_{n}
Well, since x_{1}=1, x_{2}=(2)(1)=2,
x_{3}=3(2)=6, x_{4}=4(6)=24, etc. In fact,
x_{n}=n!, but the notation is really a description, not
a formula. I don't know any nice "formula" for n! (except for itself).
Yet another recursive sequence
Try
x_{1}=1
x_{n+1}=(n+1)+x_{n}
Here the sequence looks like 1, 1+2=3, 1+2+3=6, 1+2+3+4=10, etc. In
this case, a formula can be guessed. It seems that x_{n}
should be n(n+1)/2. I will call y_{n}=n(n+1)/2. If
you plug in n=1 and n=2 and n=3 and n=4, you'll get the numbers
y_{1}=1 and y_{2}=3 and y_{3}=6 and
y_{4}=10. But why should x_{10,000} be equal to
y_{10,000}? Why should all of the infinitely many
equations x_{n}=y_{n} be true?
A proof strategy
We could think of a very long row of dominos standing on their narrow
end. The dominos are close enough together so that if one falls, the next one falls
over. I bet that if the first one
is pushed
then they will all fall over.
The first one is pushed ... This is just the observation that x_{1}=1 and y_{1}=1.
If one falls, the next one falls over ... Suppose x_{n}=y_{n}. Then y_{n+1}=(using the formula!) (n+1)((n+1)+1)/2=(n+1)(n+2)/2=((n+1)n+(n+1)·2)/2=(n+1)n/2+(n+1)=y_{n}+(n+1). But we assumed that x_{n}=y_{n} so this means y_{n+1}=x_{n}+(n+1)=x_{n+1}.
Much of this sort of verification can now be done by computer. One of the world leaders in this endeavor is Professor Doron Zeilberger, a faculty member at Rutgers. He is very approachable, and he is really smart.
The harmonic numbers
Here is a sequence which occurs in many applications. The
n^{th} harmonic number is the sum 1+1/2+1/3+1/4+...+1/n. This
can be abbreviated by SUM_{k=1}^{n}1/k. Here k
is called the index of summation, and I can't see k
(logically!) outside of the SUM sign. It is logically just as
inaccessible as the integration variable in ∫_{1}^{x}1/t dt=∫_{1}^{x}1/q dq=∫_{1}^{x}1/w dw.
Maybe you might find the harmonic numbers more appealing if I wrote
the definition recursively:
H_{1}=1
H_{n+1}=[1/(n+1)]+H_{n}
The text Concrete Mathematics: A Foundation for Computer
Science by Graham, Knuth, and Patashnik shows that the harmonic
numbers are related to stacking books over an edge. Please see the
pictures displayed here
and a more complete explanation, while available in the text cited, is
also here.
What can one say about the harmonic numbers? Do they converge? Here are the first 20:
1.000000000, 1.500000000, 1.833333333, 2.083333333, 2.283333333, 2.450000000, 2.592857143, 2.717857143, 2.828968254, 2.928968254, 3.019877345, 3.103210678, 3.180133755, 3.251562326, 3.318228993, 3.380728993, 3.439552522, 3.495108078, 3.547739657, 3.597739657
I can't see anything clearly from this. But look at H_{8}:
H_{8} = 1 + 1/2 + (1/3+1/4) + (1/5+1/6+1/7+1/8)
Underestimate each piece:
1/3+1/4 >= 1/4+1/4 = 2(1/4) = 1/2
1/5+1/6+1/7+1/8 >= 4(1/8) = 1/2
So that H_{8}>=1+3(1/2). I hope this persuades you (a proof can be done by mathematical induction, or I'll show you another way next time) that H_{2^{m}}>=1+m(1/2). First this shows that the sequence of harmonic numbers grows without any upper bound so that it can't converge. And, wow, this estimate grows very slowly. For example, to be sure that a harmonic number is greater than 100 using this estimate, I would need to look at H_{2^{198}}: that is, I'd need to add up about 10^{60} terms! That's a lot of terms.
I will investigate this more Wednesday.
Lines
Two curves intersect orthogonally at a point if their tangent lines at that point are perpendicular.
Curves
Two families of curves are orthogonal if every member of one family is orthogonal to every member of the other family.
This may be pretty but should any engineer care about it?
A very dilute bit of physics: heating a thin metal plate
Orthogonal families of curves occur in many applications.
Perhaps the
simplest occurrence to explain is the temperature of ideal steady state
heat distributions on thin homogeneous plates. So think of a thin
plate with some heat distribution on the edge. The heat is supplied in
such a way that if we measure the temperature distribution at any
point inside the plate, that temperature is always the
same. This is what's called a steady-state temperature
distribution.
The blue stuff is supposed to be ice cubes, and
cold.
The red stuff is supposed to be flames, and
hot.
This is romantic art in support of
mathematics instruction. I hope you
appreciate it.
Suppose I give you a heat distribution on the edge of the plate, as
shown. The red stuff is supposed to be flames, and the blue stuff is
supposed to be ice cubes. If I let time pass and keep the ice cubes
and flames on the edge, maybe you can see that eventually the
temperature distribution inside the plate will stabilize, and we will
get a steady state temperature distribution. Maybe you can see the
heat flow curves ("flux") and the isothermals. It is not
obvious that these families of curves are orthogonal, but this
actually is true!
Families of orthogonal curves also arise in other applications,
prominently in, say, electricity and magnetism.
An example
Since C=y/x^{2} must be constant along each parabola, differentiate: ((dy/dx)x^{2}-2xy)/(x^{2})^{2}. This should be 0, so (if we multiply by x^{4}), we get (dy/dx)(x^{2})-2xy=0, and then dy/dx=2y/x. Now if we want curves which are orthogonal to these curves, we need dy/dx to be the negative reciprocal of this. The orthogonal curves must satisfy the differential equation dy/dx=-x/(2y).
Another way to get the differential equation of the family of
parabolas
If y=Cx^{2}, then dy/dx=2Cx. But
C=y/x^{2}, so that dy/dx=2Cx=2[y/x^{2}]x
and therefore dy/dx=2y/x, just as we had above.
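As a numerical check: separating the orthogonal equation dy/dx=-x/(2y) gives 2y dy = -x dx, so the orthogonal family is the ellipses x^{2}/2+y^{2}=constant (that integration is my addition, not from class). At any point off the axes the two slopes multiply to -1:

```python
def parabola_slope(x, y):
    # slope of the parabola y = C x^2 through (x, y): dy/dx = 2y/x
    return 2.0 * y / x

def ellipse_slope(x, y):
    # slope of the orthogonal curve x^2/2 + y^2 = const through (x, y)
    return -x / (2.0 * y)
```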
And we ended by ...
breaking up into small groups and working on the limit problems. It
was fun!!!
HOMEWORK
I'd like to make
sure that we are all up to speed on methods of evaluating
limits, because this will be very useful in the remaining part of the
course. So please hand in these problems
on Monday.
Here are pieces of the 152 syllabus which I'd like you to look at, and
I hope that Mr. Scheinberg will be able
to answer questions about them on Thursday.
Arc length, surface area
8.1: 3, 8, 11, 34
8.2: 1, 4, 5, 6, 14, 31
Differential equations, direction fields
9.1: 1, 3, 4, 6, 9, 10
9.2: 1, 3, 4, 5, 6, 9, 11
Separable equations; exponential growth
9.3: 1, 4, 19, 20, 21, 37, 39
9.4: 3, 4, 5, 9, 10, 14
Basic Theorem about solutions of differential equations
If f(x,y) is a differentiable function of x and y and if
(x_{0},y_{0}) is in the domain of f(x,y), then the
initial value problem y´=f(x,y) satisfying
y(x_{0})=y_{0} has exactly one solution.
Beloved question BC4 (free response question #4 on the 2005 AP
calculus BC exam) dealt with f(x,y)=2xy and asked questions about the
slope field and Euler's method.
When I first learned of the Basic
Theorem, as I call it above, I thought that it sort of handled all
possible difficulties related to differential equations. I couldn't
really understand the complexities of trying to actually use
mathematics in real applications. The proof of this result
could probably be explained to you by the end of this course. For
example, one way to get the solution mentioned is by using Euler's
method (that's a rather slow way, but it works). What kinds of
reasonable questions are not answered by this theorem? Let me show
you.
Example 1 Suppose I know that
y´(x)=e^{-x^{2}}. Well, here is the
solution which is guaranteed by the theorem:
F(x)=∫_{0}^{x}e^{-w^{2}}dw+C. The
Fundamental Theorem of Calculus says that
F'(x)=e^{-x^{2}} and the "+C" allows adjustment for
an initial condition. In fact, we've just rewritten the differential
equation and transferred the computational "responsibility" to the
definite integral. As I mentioned back when we were first starting
methods of antidifferentiation, there's no way to find an
explicit antiderivative of this function in terms of standard
functions. Evaluation must be done through an approximation
technique. So learning about the solution (that it exists, that there
is exactly one) has not helped at all.
Example 2 I think this equation has more subtle aspects. Let's
look at y´(x)=xy^{2}. This equation looks "easy". It is
certainly separable, and the f(x,y) seems rather simple. We separate
variables and integrate: so dy/y^{2}=x dx and then
-1/y=(1/2)x^{2}+C and we can even solve for y in terms of x:
y=1/(C-(1/2)x^{2}). (I renamed C.) Now suppose we want the
solution to satisfy the initial condition: when x=0, y=8. Then C
should be 1/8, and the solution looks like
y=1/((1/8)-(1/2)x^{2}). This is o.k., a nice function. A
picture is to the right (the units on the horizontal and vertical
axes are rather different). We get into some trouble when we think
about the solution curve. Suppose the curve represents the growth of
"something" (I think I suggested "wahoonies" in class). Usually x
represents the "independent" variable, say, time. What happens to the
growth of the wahoonies? Look at {1/8}{1/2}x^{2} from x=0
(where it is 1/8) up to ... well, this should not be 0 because it is
in the denominator. It will be 0 when x=1/2 (also, of course,
1/2). The growth gets larger and large, and, finally, explodes (!?)
at 1/2 (also, of course, backwards at 1/2). We can't predict the
population of wahoonies on further than 1/2 or further backwards than
1/2. This disaster is, to me, unforseen in the nice function
xy^{2}. Things are actually worse than this.
If we change the
initial condition to, say, (0,80,000), then the unique solution is
y=1/((1/80,000)-(1/2)x^{2}) and the domain of this function is
only (-1/200,+1/200): the wahoonie explosion is even closer in
time. Another picture is shown, with even more distorted axes, but the
idea is, I hope, clear: a narrower, steeper curve with a higher-up
initial condition. As the initial number of wahoonies grows, the
explosion comes closer and closer in time.
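Using the separated solution y=1/(C-(1/2)x^{2}) with C=1/y(0) (signs as I've reconstructed them above), the blow-up time is sqrt(2/y(0)). A quick Python check of both initial conditions:

```python
import math

def blowup_time(y0):
    # solution y(x) = 1/(1/y0 - x^2/2) of y' = x y^2 with y(0) = y0 > 0;
    # it explodes when x^2/2 reaches 1/y0
    return math.sqrt(2.0 / y0)
```

With y(0)=8 this gives 1/2, and with y(0)=80,000 it gives 1/200, so larger initial populations do explode sooner.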
How to "solve" ODE's
What these two examples, and others, can tell you about the famous
Existence and Uniqueness Theorem for ODE's (a version is quoted
above as the "Basic Theorem") is not that the theorem is wrong or a
fraud, but more that it declares only what is there. I know that I've
had a tendency to "read into" the theorem a heck of a lot more than it
contains. The two examples I showed are an effort to convince me
(again!) and maybe you that solutions may not be effectively
computable, and that the solutions are guaranteed only to "live" for a
short time, and more than that cannot be assumed.
Solving ODE's can occur on various levels.
Exact solution of a logistic equation
Here I'll look at y'(t)=y(1-y). We're supposed to think that the rate
of change is directly proportional to the population y(t) at time
t, but is also limited by the amount of resources available. Here
the resource limit (carrying capacity?) is 1. Of course I have made
the numbers simple so that someone essentially even more simple can
analyze the equation easily. To solve this equation (in the
sense of the first level above) I recognize it as a separable equation
and write
dy/(y(1-y)) = dt
and then (partial fractions!) recognize that 1/[y(1-y)]={1/y}+{1/(1-y)} where the advantage is that I can antidifferentiate the pieces. So ln(y)-ln(1-y)=t+C (I forgot the minus sign when I was analyzing this in preparation for the lecture!) so that ln(y/(1-y))=t+C and (renaming K=e^{C}) y/(1-y)=Ke^{t} so that y=(1-y)Ke^{t}=Ke^{t}-yKe^{t} and y+yKe^{t}=Ke^{t} and (1+Ke^{t})y=Ke^{t}. Finally,
y=Ke^{t}/(1+Ke^{t})
Well, how can we use this formula? I can try to match an initial
condition, and then I can look at a solution curve. In class I used
(0,3/4) (so K was "clearly" 3) but let me be more adventurous
here. When t=0 in Ke^{t}/(1+Ke^{t}) we get K/(1+K).
If we want K/(1+K) to be .37, an approximate value of K is .587 (yes,
a silicon pal helped me here). And my same pal graphed the solution
curve shown to the right. It sure looks like the curve is an
increasing function of time, and to the right it approaches +1 and to
the left it approaches 0. You can check these limits using the
algebraic formula:
y=.587e^{t}/(1+.587e^{t})
As t→+infinity, the quotient is always less than 1 (the bottom is
larger than the top!) but the 1 becomes less and less significant,
because e^{t}→infinity. If you would like a bit more
algebra, consider that (factor out the exp!)
.587e^{t}/(1+.587e^{t})=1/([1/.587]e^{-t}+1)
and e^{-t}→0 as t→+infinity.
Similarly, e^{t}→0 as t→-infinity, so that
y→0 then.
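These limits are easy to confirm numerically. A Python sketch of the solution formula, with the approximate K=.587 from above:

```python
import math

def y(t, K=0.587):
    # logistic solution y = K e^t / (1 + K e^t)
    return K * math.exp(t) / (1.0 + K * math.exp(t))
```

The values increase with t, sit near .37 at t=0, hug 1 for large positive t, and hug 0 for large negative t, just as the picture suggests.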
One problem is that in real applications, we seldom know exact initial conditions, and one thing that differential equations "teach" is that there may be what's called sensitivity to initial conditions. What happens if I change the initial condition from .37 to .38? How certain is it that y(t) will still →1 as t→infinity? A qualitative study may give such information easily.
Slope field analysis of a logistic equation
Look at the differential equation: y'(t)=y(1-y). If a solution curve of this equation passes through the point (2,3), then the slope of the curve must be (3)(1-3)=-6. The curve must be tangent to a line of slope -6 passing through (2,3). We could therefore think/hope that the curve will look a bit like that line near (2,3): we can draw a short line segment of slope -6 through the point (2,3). We can try to draw lots of these little line segments. This is called a direction field (textbook) or a slope field (what I will call it here). Maple's command, dfieldplot (included in the package DETools) produced the collection of green arrows here. I am not too familiar with the plethora (!) of options of dfieldplot, so I took the default settings, which produce "line field" elements with arrow heads. I superimposed the solution curve we know already. I hope you can see that it is tangent to the slope field elements shown.
Here's a picture with more of a qualitative point of view. I drew the slope field at integers and half-integers from -3 to 3 in both the horizontal and vertical directions. Then I tried to understand and sketch what would happen with various initial conditions at time 0. There are two special solution curves which happen when y(1-y)=0. The two constants identified, 0 and 1, are graphs of horizontal lines which satisfy the differential equation. These constants make the right-hand sides equal to 0, and, since the functions are constants, the derivatives of the functions are 0. These are called equilibrium solutions of the differential equation.
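A slope field is closely related to Euler's method: just follow the little segments. Here is a Python sketch of Euler's method on y'=y(1-y) (my own illustration; the pictures in class came from Maple's dfieldplot, not from this code):

```python
def euler_logistic(y0, t_end, n_steps):
    """Euler's method for y'(t) = y(1 - y) from t = 0 to t_end."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = y + h * y * (1.0 - y)   # follow the slope field segment at (t, y)
    return y
```

Starting from y(0)=.37 the numerical solution climbs toward the equilibrium y=1, while the equilibria 0 and 1 stay exactly fixed.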
The initial conditions
The initial conditions in interval A on the y-axis are in magenta. Forward evolution pulls the corresponding solution curves all towards the equilibrium condition y=1. Backwards, they seem to approach y=0. The initial conditions in region B seem to evolve as t→+infinity towards again the equilibrium solution y=1. As time goes backwards, these solutions explode out (the wahoonies are exploding again). The solution y=1 is a special kind of equilibrium, called a stable equilibrium. Small disturbances in initial conditions near this equilibrium lead to solutions which all approach it as t→+infinity.
(I tried this color but it was too darn hard to
read!)Here in region C the instability of the
equilibrium solution y=0 is shown. If you do perturb the solution up
or down, eventually solution curves push away from the
equilibrium. Region C's initial conditions lead to negative infinity
type of explosions, as shown.
Comment One thing which makes drawing the slope field and understanding it easier is that the differential equation y'(t)=y(1-y) is autonomous: there is no mention of the independent variable, t, on the right-hand side. The word autonomous has the following (perhaps more common) dictionary meanings:
How are human brains built? (Maybe  a limited
comment!)
I've been told that the human brain has a terrific amount of its
capacity directed towards interpretation of visual data. Certainly I
find the slope field pictures much more convincing than looking at the
formula of the algebraic solution. And more numbers would not
necessarily help that much: I find a bunch of numbers difficult to
interpret. So that's why people like visual displays of information.
And another autonomous equation
I think I looked at something like
y´(t)=(y-6)^{4}(y+5)^{7}. Certainly I don't think
I could explicitly solve and get an algebraic solution. But I could
easily (well, almost easily!) sketch a slope field for this
equation. It is also autonomous so I can move the slope field elements
left and right after I sketch them on one vertical line. The lines
y=-5 and y=6 represent equilibrium solutions. y=-5 is an unstable
equilibrium. Perturbations of it either move up towards 6 or explode down to -infinity.
y=6 is a more complicated situation.
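The sign of the right-hand side tells the whole qualitative story, and checking signs is easy to automate. A short Python sketch:

```python
def rhs(y):
    # right-hand side (y - 6)^4 (y + 5)^7 of the autonomous equation
    return (y - 6) ** 4 * (y + 5) ** 7

# equilibria where rhs is 0: y = 6 and y = -5
# rhs > 0 on (-5, 6) and on (6, infinity): solutions move up
# rhs < 0 on (-infinity, -5): solutions move down
```

Since the sign is positive on both sides of y=6 (the fourth power never goes negative), solutions below 6 approach it while solutions above 6 run away, which is why y=6 is the more complicated, semistable case.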
Limit manipulations
I'd like students to do these problems
and hand them in on Monday. That way I will have a firmer expectation
that we'll be able to cope with future limit problems (which will
occur!) in the course.
Return of exam
I returned the exam and asserted that everyone in the class should
stay in the class. Further remarks will be made on Thursday.
Another differential equation
Here's a simpler differential equation than the one we looked at last
time: y'(t)=Ky(t), where K is some fixed constant.
What it might mean
What are the solutions
All the solutions?
Another way to solve it
Separable equations
A modification of growth, with a carrying capacity
The logistic equation
An initial situation is specified mathematically, and then "things" evolve or change according to some well-specified "laws" relating their interaction. Efforts of scientists and engineers were based upon this approach for several centuries. Although the approach is not sufficient to handle everything, and sometimes is not easy, it had major successes in both theoretical and practical aspects of science and engineering. In today's discussion, we will look at a very idealized model of a physical situation which might make subsequent consideration of theory easier. We will consider an ideal vibrating spring with no damping. This isn't the simplest differential equation. Some important differential equations are simpler; certainly the equations describing the motion of a rock dropping under gravity, widely considered in early study of calculus, are easier. But what happens in this example is much more characteristic of how differential equations are used than most simpler examples.
What we need to know
We need F=ma: the force on an object is directly proportional to the
rate of change of the rate of change of the position of the object:
the acceleration. The constant of proportionality is called mass. We
will discuss an ideal spring, without damping or dissipation of
energy. One should think that the spring is floating in space, in
fact, it is alone in the universe. There's no gravity, no air
resistance, no ... anyway, the spring has a mass attached to it. If we
attached the mass very gently and at the correct length (in
equilibrium) the spring would not appear to move. However if we
push or pull the mass, or attach the mass at a position other than
equilibrium, a force seems to act on the mass. Hooke's
law states that the spring exerts a force on the object whose
direction is opposite to the object's displacement from equilibrium
and whose magnitude is directly proportional to the magnitude of the
displacement. So if x(t) is the displacement from equilibrium at time
t, the spring exerts a force of -kx(t) on the object. Since also F=ma,
and a is x´´(t), the acceleration, we have the law of motion for an
undamped spring: mx´´(t)=-kx(t). By the way, Hooke's law has been
experimentally verified under many conditions, as long as the spring
is not stretched or squeezed too much (don't take a rubber band
and stretch it 20 feet, for example).
The double game
I will try to play two sorts of intellectual game: I will try
to use physical "intuition" to determine what to expect about spring
motion, and I will also investigate what purely mathematical
deductions can be made. Well, first I want to make my life a little
bit easier. I'll assume that my "units" are chosen so that m=1 and
k=1. Remarks at the end will cover what happens if we don't assume
this. Now I want to solve the equation mx´´(t)=-kx(t). Well,
what does "solve" the equation mean? I know if I study the polynomial
2B^{3}-4B-8=0 that a solution is B=2. When I substitute 2 into
the polynomial the equation becomes true. Here I have what's called a
secondorder (the unknown function appears with a maximum of two
derivatives) ordinary differential equation. The word "ordinary" is
sometimes used, because there are other sorts of differential
equations, such as partial differential equations. A solution would be
a function which could be substituted into the equation, and for which
the equation would be true for all appropriate values of t (say, t in
the domain of x(t), for example). We could try some x(t)'s. For
example, functions like x(t)=-16t^{2}+9t-5 occur when we drop
rocks. Then x´´(t)=-32, and there are very few t's for which
-32=-(-16t^{2}+9t-5). So this x(t) is not a solution. But we
should try to use our physical intuition. The motion of a spring
should be back and forth, so certainly likely candidates should be
bounded. But, in fact, the function 16t^{2}+9t5 gets
arbitrarily large in magnitude. What should we try? Well, sin(t) was
suggested, and that works: if x(t)=sin(t), then
x´´(t)=sin(t)=x(t). And so does cos(t). Wait: so does
cos(t)+sin(t). Indeed, lots of other suggestions work:
x(t)=Acos(t)+Bsin(t) works if A and B are any constants. The equation
x´´(t)=-x(t) is structurally rather nice. The sum of two solutions is
a solution, and a constant multiplying a solution is a solution. (This
is called, in math-speak, linearity, and in engineering-speak, the
principle of superposition). Well, mathematically A and B are
nice. But what do they say about the physics of the situation?
Initial conditions
If x(t)=Acos(t)+Bsin(t), then A=x(0) and B=x´(0). Therefore A
represents the initial position (really, displacement from
equilibrium) and B represents the initial velocity of the mass. What
do cos(t) and sin(t) represent? Since cos(0)=1 and cos´(0)=-sin(0)=0,
somehow cos(t) in this situation represents an initial position
solution to x´´(t)=-x(t): we could call it x_{pos}(t). And
sin(t) has sin(0)=0 and sin´(0)=cos(0)=1, an initial chunk of
velocity. And so maybe in this situation we could call sin(t) the
solution x_{vel}(t), an initial velocity solution. If we
needed to do lots of solutions to x´´(t)=-x(t) with many varied
initial positions and velocities, the formula
Ax_{pos}(t)+Bx_{vel}(t) might be useful.
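A quick numerical check of this reading of A and B (a sketch; the sample values A=2, B=-3 and the step size h are just for illustration):

```python
import math

def x(t, A=2.0, B=-3.0):
    # a sample solution x(t) = A cos(t) + B sin(t) of x''(t) = -x(t)
    return A * math.cos(t) + B * math.sin(t)

h = 1e-6
initial_position = x(0.0)                    # should equal A exactly
initial_velocity = (x(h) - x(-h)) / (2 * h)  # central difference; should be close to B
```

So the two constants in the general solution really are the initial position and initial velocity.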
Algebraic digression: simple harmonic motion
I didn't do this in class, mainly because I find algebra
painful. The algebra following is really motivated by physical
considerations, so maybe that excuses it.
Look at Acos(t)+Bsin(t). I can rewrite this in a way which some people
find more appealing. I'll multiply and divide by
sqrt(A^{2}+B^{2}). The result is
sqrt(A^{2}+B^{2})[(A/sqrt(A^{2}+B^{2}))·cos(t)+(B/sqrt(A^{2}+B^{2}))·sin(t)]. There are some funny numbers appearing: A/sqrt(A^{2}+B^{2}) and B/sqrt(A^{2}+B^{2}). These numbers have squares which sum to 1, and therefore there is a right triangle with hypotenuse of length=1 which has them as legs. Also therefore there is an angle, theta, so that sin(theta)=A/sqrt(A^{2}+B^{2}) and cos(theta)=B/sqrt(A^{2}+B^{2}). This is "just" from triangle geometry. Look at the picture. Then the sine addition formula gives Acos(t)+Bsin(t)=sqrt(A^{2}+B^{2})sin(t+theta): simple harmonic motion with magnitude sqrt(A^{2}+B^{2}) and phase angle theta.
I gave Maple A=3 and B=7, and the result, 3cos(t)+7sin(t), is plotted here. To me the fact that simple harmonic motion is the result is not totally obvious. The magnitude, sqrt(A^{2}+B^{2}), is about 7.62, and the phase angle, arctan(A/B), is about .40.
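The identity can be spot-checked numerically (a sketch using Python's math module; the sample values A=3 and B=7 match the Maple example above):

```python
import math

A, B = 3.0, 7.0
magnitude = math.hypot(A, B)   # sqrt(A^2+B^2), about 7.62
theta = math.atan2(A, B)       # phase angle: sin(theta)=A/magnitude, cos(theta)=B/magnitude

# compare A cos(t) + B sin(t) with magnitude*sin(t+theta) at many sample points
max_gap = max(
    abs(A*math.cos(k*0.1) + B*math.sin(k*0.1) - magnitude*math.sin(k*0.1 + theta))
    for k in range(200)
)
# max_gap should be essentially zero
```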
Other solutions?
I told students that I knew another solution to x´´(t)=-x(t) besides
Acos(t)+Bsin(t). Being clever rascals (RASCAL: "One that is playfully
mischievous") they refused to believe me. I said, I have a secret
solution, W(t), which satisfies W´´(t)=-W(t) and W(0)=3 and
W´(0)=-4. They asserted that this W(t) would have to be
3cos(t)-4sin(t). So we compared them. Here is one way to
compare: form a function by taking the difference, and then look at
this difference. So we defined:
C(t)=W(t)-(3cos(t)-4sin(t)). What do we know about C(t)? Well,
C´´(t)=-C(t). This is not entirely obvious, but it comes from
subtracting the equations
W´´(t)=-W(t) and (3cos(t)-4sin(t))´´=-(3cos(t)-4sin(t)).
These equations are true because both functions are solutions of the
spring motion equation. What else do we know?
C(0)=W(0)-(3cos(0)-4sin(0)). I assumed that W(0)=3, so C(0)=0.
Also, C´(t)=W´(t)-(3cos(t)-4sin(t))´ so C´(0)=0.
This shouldn't be so strange since we created C(t) to compare
solutions with the same initial velocity and position. Now C(t) is
supposed to describe the motion of a spring when the initial
displacement and the initial velocity are both 0. I believe that
under those conditions the ideal spring will never move at
all. Why, either from the physical or
mathematical points of view, should this be so?
Conserved quantities
I know that the kinetic
energy of a mass, m, (recall, m=1 here) is
(1/2)mv^{2}. Therefore the kinetic energy of the mass on the
spring is (1/2)x´(t)^{2}. The potential
energy is equal to the amount of work which must be done to
get something into a position. Now to push the mass into a
displacement of x(t) from equilibrium, we must push against the
Hooke's law force of -kx(t). But that force varies with distance. We
did some problems like this when we discussed work. What you do is
multiply kx(t) by dx, a tiny distance in which the force hardly
varies, sum, take limits, and, hey, we end up with _{0}^{x(t)}kx dx
and this is (1/2)kx(t)^{2}. Also, k should be 1. Also we lost
the minus sign because we are pushing against the spring. Wow!
The total energy of this ideal and isolated spring is the sum of the
potential and kinetic energies, so it is
TE(t)=(1/2)x´(t)^{2}+(1/2)x(t)^{2}. I wonder if this
energy changes over time?
The picture is supposed to show you the kinetic energy and the
potential energy separately. Indeed, it turns out that the total
is a constant. Read on!
The math person takes the energy and runs ...
Now forget the previous paragraph, and consider the TE(t) associated
with the function C(t) which has these properties:
C´´(t)=-C(t) and C(0)=0 and C´(0)=0.
Take d/dt of TE(t)=(1/2)C´(t)^{2}+(1/2)C(t)^{2}, and
get (minding the Chain Rule!)
TE´(t)=(1/2)2C´(t)C´´(t)+(1/2)2C(t)C´(t). More than one student
noticed that this is the same as TE´(t)=C´(t)[C´´(t)+C(t)]. But the quantity
inside the []'s is 0 since the spring equation is satisfied. Therefore
TE´(t)=0 for all t (this is a version of conservation of
energy). That means TE(t) is constant. But
TE(0)=(1/2)C´(0)^{2}+(1/2)C(0)^{2}, so (using the
initial conditions we have) TE(0)=0 so that TE(t) is always 0. Hey,
that means C(t)^{2} is always 0, so C(t)=0 always, so
W(t)-(3cos(t)-4sin(t))=0 for all t so W(t)=3cos(t)-4sin(t) and I must
have been mistaken: the W(t) I thought was different is exactly the
same as the solutions I knew before.
Therefore ...
In some sense we have a perfect Newtonian description of this very
simple system. The initial conditions of position and velocity
determine the motion of the spring for all time, using the
differential equation to describe how the spring "evolves" through
time. The key to realizing that we had all solutions and therefore had
described all of the legal motions of the system was examination of
the total energy of the system.
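Energy conservation for one particular solution can be watched numerically (a sketch; the coefficients 3 and -4 echo the W(t) example above):

```python
import math

def x(t):
    # a sample solution of x''(t) = -x(t)
    return 3*math.cos(t) - 4*math.sin(t)

def xprime(t):
    # its derivative
    return -3*math.sin(t) - 4*math.cos(t)

def TE(t):
    # total energy: kinetic + potential (with m = k = 1)
    return 0.5*xprime(t)**2 + 0.5*x(t)**2

energies = [TE(k*0.25) for k in range(60)]
# every value should be (1/2)(3^2 + 4^2) = 12.5, for all t
```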
If I had not set k and m equal to 1 I think I would have needed to look at cos(sqrt{k/m}t) and sin(sqrt{k/m}t) as solutions of mx´´(t)=-kx(t).
I also asked what would happen if we changed x´´(t)=-x(t) to x´´(t)=x(t). Some suggestions were made. And, actually, e^{t} and e^{-t} are solutions. And, indeed, if A and B are constants, then Ae^{t}+Be^{-t} is a solution. I asked what the initial position and velocity solutions were. That is, can we find A and B so that if x(t)=Ae^{t}+Be^{-t}, then x(0)=1 and x´(0)=0. With some thought, we got A=1/2 and B=1/2. Huh. The initial velocity solution, with x(0)=0 and x´(0)=1, has A=1/2 and B=-1/2. Indeed. So: for this equation, x´´(t)=x(t), x_{pos}(t)=(1/2)(e^{t}+e^{-t}). This is the hyperbolic cosine, called cosh(t) ("kosh of t"). And x_{vel}(t)=(1/2)(e^{t}-e^{-t}). This is the hyperbolic sine, called sinh(t) ("cinch of t").
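A small numerical check that cosh really is the initial position solution of x´´(t)=x(t) (a sketch; the step h is just for the finite-difference second derivative):

```python
import math

# cosh(t) = (1/2)(e^t + e^(-t)) has cosh(0)=1 and cosh'(0)=sinh(0)=0
start_position = math.cosh(0.0)   # should be 1
start_velocity = math.sinh(0.0)   # should be 0

# check x'' = x at t=1 with a second central difference
h = 1e-4
t = 1.0
second_derivative = (math.cosh(t+h) - 2*math.cosh(t) + math.cosh(t-h)) / h**2
gap = abs(second_derivative - math.cosh(t))   # should be tiny
```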
Today's class
Two formulas, one for arc length and one for (lateral) surface area. I
covered this material in a very cursory fashion, because I
don't think I really have much to add to what any textbook says.
Word of the day cursory
hasty, hurried.
The definite integral computes ...
Cut apart, approximate, sum, take limits.
A formula for arc length
Testing the formula
Computational defects of the arc length formula
(Lateral) surface area
Testing the formula
Similar defects
Quote a textbook example. General comment.
Reality?
Here is a volume which can be filled with paint but which
can't be painted.
What does this mean?
Computing the force
It was recognized in class (note the passive voice!) that this
integral can be
computed with a trig substitution. But first we used the substitution x=At
so dx=A dt and
(x^{2}+A^{2})^{3/2}=A^{3}(t^{2}+1)^{3/2}.
The limits amazingly stay the same: as x goes from -infinity to
+infinity so does t (this is one nice thing about the improper
integral). The force is (another A cancels out due to the top part of
the sine fraction) (GmK/A)_{-infinity}^{infinity}(1/(t^{2}+1)^{3/2}) dt.
Now finish by computing _{-infinity}^{infinity}(1/(t^{2}+1)^{3/2}) dt
(which I don't need; you may not notice but the point of the
computation is already done!). So take t=tan(w), then t=-infinity and
t=+infinity become w=-Pi/2 and w=Pi/2 and dt=(sec(w))^{2}dw
and (1/(t^{2}+1)^{3/2}) becomes 1/(sec(w))^{3}
and the integral becomes _{-Pi/2}^{Pi/2}cos(w) dw which is 2. The
force is then (2GmK/A).
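A rough numerical version of that integral (a sketch; the cutoff 1000 and the step count are arbitrary choices, and the tails beyond the cutoff contribute almost nothing):

```python
def integrand(t):
    return (1.0 + t*t) ** -1.5

# midpoint rule on [-1000, 1000] as a stand-in for the improper integral
a, b, n = -1000.0, 1000.0, 400_000
dt = (b - a) / n
approx = sum(integrand(a + (i + 0.5)*dt) for i in range(n)) * dt
# approx should be very close to 2, matching the trig-substitution answer
```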
The wonderful fact is that the force between the wire and the external
mass is now inverse first power, starting from an inverse
square law of attraction. This is very neat to me. The reason why I
claimed earlier that the "point of the computation is already done" is
that I wanted to show the force was inverse first power. The
particular constant of proportionality is not as interesting to me
here.
Angstroms and molecules
Any inverse square law works the same. So if one has a tiny molecule
maybe
attracted by some force (charge, for example) to a big molecule which
is sort of straight, the computation shows that the attraction should
be approximately inverse first power. The improper integral does
closely approximate the real thing, because the "edges" (far away from
x=0) don't usually matter very much. Neat, neat, neat.
Back to Pi
My exposition of the famous Gaussian
integral is here. This integral is connected to the Central Limit
Theorem, one of the most remarkable results in probability and
statistics. Please look up this result on the web. There are many
animations demonstrating it.
The other kind of improper integral
We have been looking at improper integrals where there is a
defect in the domain: the interval of integration is infinite. There are
also improper integrals where the defect is in the range: the integrand
blows up. Please remember these integrals:
_{56}^{201}[1/x^{5}] dx=-1/(4x^{4})]_{56}^{201}=-1/(4{201}^{4})+1/(4{56}^{4})
_{56}^{201}[1/x^{1/5}] dx=(5/4)x^{4/5}]_{56}^{201}=(5/4){201}^{4/5}-(5/4){56}^{4/5}
Now change 56 to B, where B is a positive number less than 201.
_{B}^{201}[1/x^{5}] dx=-1/(4x^{4})]_{B}^{201}=-1/(4{201}^{4})+1/(4{B}^{4})
_{B}^{201}[1/x^{1/5}] dx=(5/4)x^{4/5}]_{B}^{201}=(5/4){201}^{4/5}-(5/4){B}^{4/5}
Now investigate what happens as B->0^{+}. The first integral, with
1/x^{5} as integrand, goes to +infinity. So we will say that
The improper integral
_{0}^{201}[1/x^{5}] dx diverges.
As B->0^{+}, the value of the second integral, with integrand 1/x^{1/5}, approaches (5/4){201}^{4/5}. So we will say that
The improper integral
_{0}^{201}[1/x^{1/5}] dx converges and its value is (5/4){201}^{4/5}.
A failure of physical theory?
The repulsion between two protons is inverse square. So how much work
would be required to "push" two protons together to form another atom
with two protons in its nucleus? If we compute this work using the
improper integral that simple theory would require, then we'd have
something like _{0}^{someplace}[1/x^{2}] dx and
this integral would diverge: an infinite amount of work would be
required. So maybe we need to change theories. Maybe the neutrons
"mediate" in some way, or maybe the inverse square law of attraction
breaks down at really small (nuclear) dimensions, or maybe some other
sort of theory is needed. I don't know.
Inverse powers
We considered which integrals of the form _{0}^{201}[1/x^{STUFF}] dx
would converge and which would diverge. Here were the conclusions we
reached:
If STUFF<1, then the integral would converge.
If STUFF>1, then the integral would diverge.
When STUFF=1 we had to consider the integral separately,
because the antiderivative was no longer a simple power of x, but was
ln(x). We concluded that
If STUFF=1, then the integral would diverge.
This is exactly the reverse (except for the borderline case of 1) of
what happened in the other case (except for the borderline case of 1, which diverges both times). Also here is a picture. The picture doesn't help me much at all. Oh well. I like pictures.
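These conclusions can be watched happening numerically, using the antiderivative from above (a sketch; the sample values of B are arbitrary):

```python
def integral_from_B_to_201(p, B):
    # antiderivative of x^(-p) is x^(1-p)/(1-p) when p != 1
    F = lambda x: x**(1 - p) / (1 - p)
    return F(201) - F(B)

# p = 1/5 < 1: the values settle down near (5/4)*201^(4/5)
converging = [integral_from_B_to_201(0.2, B) for B in (1e-2, 1e-4, 1e-8)]

# p = 5 > 1: the values blow up as B -> 0+
diverging = [integral_from_B_to_201(5, B) for B in (1e-1, 1e-2, 1e-3)]
```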
Just one more integral
Let's look at _{0}^{1}ln(x) dx. I know that ln(x)->-infinity
as x->0^{+}. Is there a finite amount of "area" enclosed
between the y-axis, the x-axis, and this curve?
Let's integrate by parts.
u=ln(x), du=(1/x)dx, dv=dx, v=x. Suppose B is positive and close to 0. Then _{B}^{1}ln(x) dx=x·ln(x)]_{B}^{1}-_{B}^{1}1 dx=x·ln(x)-x]_{B}^{1}. When x=1 we get just -1 since ln(1)=0. What about -(B·ln(B)-B)? As B->0^{+}, the B term ->0. But ln(B)->-infinity, and B->0. Which one "wins" in the B·ln(B) computation? Well, just as exponentials go faster, logs go slower. To see that B·ln(B)->0 as B->0^{+}, I will need to rearrange things as a fraction so that we can take advantage, again, of L'Hopital's rule. So:
B·ln(B)=ln(B)/(1/B). As B->0^{+} this has the infinity/infinity form, so its limit (by L'H) will be the same as the limit of (1/B)/[-1/B^{2}]=-B^{2}/B=-B. But that's 0. So _{0}^{1}ln(x) dx converges and its value is -1.
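The same limit can be watched numerically, using the antiderivative from the integration by parts (a sketch; the B values are arbitrary samples):

```python
import math

def area_from_B_to_1(B):
    # integration by parts gave the antiderivative x*ln(x) - x
    return (1*math.log(1) - 1) - (B*math.log(B) - B)

values = [area_from_B_to_1(B) for B in (1e-1, 1e-3, 1e-6, 1e-9)]
# the values should approach -1 as B -> 0+, with B*ln(B) fading away
```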
Next Thursday
I hope that Mr. Scheinberg will discuss problems from
sections 7.4, 7.5, 7.7 and 7.8.
HOMEWORK
Please do problems from the remainder of chapter 7.
Word of the day lackadaisical
lacking spirit or liveliness; listless.
Integral #1
_{56}^{201}[1/x^{5}] dx=-1/(4x^{4})]_{56}^{201}=-1/(4{201}^{4})+1/(4{56}^{4})
Please notice that this integral is positive, as it should be. I have
attempted to give a rough qualitative sketch of a region in the plane
whose area is computed by the integral.
Integral #2
_{56}^{201}[1/x^{1/5}] dx=(5/4)x^{4/5}]_{56}^{201}=(5/4){201}^{4/5}-(5/4){56}^{4/5}
Please notice that this integral is positive, as it should be. I have
attempted to give a rough qualitative sketch of a region in the plane
whose area is computed by the integral.
Nasty comment department
Well, yeah, the two pictures are the same, but I emphasized
that the drawing was just a tiny qualitative help to us. Why, Ms. Johnson agreed that I accidentally
left off any marks on the vertical axis so of course both pictures are
valid.
Integral #1, again
Suppose A is a large positive number. Then
_{56}^{A}[1/x^{5}] dx=-1/(4x^{4})]_{56}^{A}=-1/(4{A}^{4})+1/(4{56}^{4})
Now as A->infinity, this "area" seems to approach a limit. The value
of that limit is 1/(4{56}^{4}). We will say that _{56}^{infinity}[1/x^{5}] dx
converges and that the value of this improper integral is
1/(4{56}^{4}).
Please notice that this integral is positive, as it should be. I have
attempted to give a rough qualitative sketch of a region in the plane
whose area is computed by the integral.
Integral #2, again
Suppose A is a large positive number. Then
_{56}^{A}[1/x^{1/5}] dx=(5/4)x^{4/5}]_{56}^{A}=(5/4){A}^{4/5}-(5/4){56}^{4/5}
Now as A->infinity, this "area" seems to get larger. It certainly does not approach a finite limit. We will say that _{56}^{infinity}[1/x^{1/5}] dx diverges (or does not converge).
Huh?
Both of the regions from 56 "out to infinity" look maybe something
like what I've drawn. Although I love pictures, I can't tell by
looking that one of these regions "has" finite area, and the other one
does not. The phenomenon seems to be subtle.
We discussed the convergence or divergence of the following integrals (or
something like them):
Well, this all seems slightly silly. But it isn't because improper integrals arise frequently in applications and sometimes are much easier and more important to compute than standard definite integrals.
Escape ..
When I was very young, I read a science fiction novel by Robert
Heinlein which stated "... the escape velocity from the Earth is 7
miles per second ..." and now I would like to sort of verify this
using only wellknown (?) facts and some big ideas of physics.
Inverse powers
We considered which integrals of the form _{56}^{infinity}[1/x^{STUFF}] dx
would converge and which would diverge. Here were the conclusions we
reached:
If STUFF>1, then the integral would converge.
If STUFF<1, then the integral would diverge.
When STUFF=1 we had to consider the integral separately,
because the antiderivative was no longer a simple power of x, but was
ln(x). We concluded that
If STUFF=1, then the integral would diverge.
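These conclusions, too, can be watched happening numerically with the antiderivative (a sketch; the sample values of A are arbitrary):

```python
def integral_from_56_to_A(p, A):
    # antiderivative of x^(-p) is x^(1-p)/(1-p) when p != 1
    F = lambda x: x**(1 - p) / (1 - p)
    return F(A) - F(56)

# p = 5 > 1: the values settle down near 1/(4*56^4) as A grows
converging = [integral_from_56_to_A(5, A) for A in (1e3, 1e6, 1e9)]

# p = 1/5 < 1: the values just keep growing
diverging = [integral_from_56_to_A(0.2, A) for A in (1e3, 1e6, 1e9)]
```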
Exponential decay versus polynomial growth
I asked which other improper integrals might converge. Well,
exponential decay was suggested. So, of course _{0}^{infinity}e^{-x}dx converges. We
computed this as the limit of _{0}^{A}e^{-x}dx as A->infinity, and
we got 1 as the answer. I wondered if, say, _{0}^{infinity}x^{56}e^{-x}dx
converged?
Thinking about how x^{56}e^{-x} behaves when x gets large leads to the question: which of x^{56} and e^{x} gets bigger faster? So I answer this:
lim_{x->inf} x^{56}/e^{x} =(L'Hop) lim_{x->inf} 56x^{55}/e^{x} = [after several uses of L'H] = lim_{x->inf} const/e^{x}. So this limit must be 0. This is important. For example, consider the function f(x)=x^{3124}e^{-0.00002x}. Here there is an enormous power of x, multiplied by a very slightly (?) decreasing exponential. Let me call upon something that can compute better than I can:
> f:=x->x^3124*exp(-0.00002*x);
                     f := x -> x^3124 exp(-0.00002 x)
> f(10);
                     0.9998000200 10^3124
> f(100);
                     0.9980019987 10^6248
> f(10^10);
                     0.1269460960 10^(-55618)
So f(10) is large, and f(100) is even larger. And f(10^{10}) is very, very, very small, indeed. In many applications, exponential growth and decay occur, and they eventually "win" over what might seem huge competition.
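Ordinary floating point overflows on numbers this size, but working in log10 tells the same story (a sketch in Python rather than Maple):

```python
import math

def log10_f(x):
    # log10 of f(x) = x^3124 * exp(-0.00002*x), computed in log space to avoid overflow
    return 3124 * math.log10(x) - 0.00002 * x / math.log(10)

big = log10_f(10.0)       # about 3124: f(10) is huge
bigger = log10_f(100.0)   # about 6248: f(100) is even bigger
tiny = log10_f(1e10)      # about -55619: f(10^10) is fantastically small
```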
Some more improper integrals
We had already computed _{0}^{infinity}e^{-x}dx=1. Now I
computed _{0}^{infinity}xe^{-x}dx. I used (of
course) integration by parts. Here the parts were:
u=x, du=dx, dv=e^{-x}dx, v=-e^{-x}. The boundary term is (x)(-e^{-x})]_{0}^{A} (as A->infinity). When x=0 this "disappears" because of the first factor. And Ae^{-A}->0 because exponential decay is faster than any polynomial growth. The minus signs cancel, and we seem to see that _{0}^{infinity}xe^{-x}dx converges and its value is 1.
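A brute-force check of that value (a sketch; the cutoff 50 is arbitrary, since xe^(-x) is negligible past it):

```python
import math

def integrand(x):
    return x * math.exp(-x)

# midpoint rule on [0, 50] as a stand-in for the improper integral
n = 200_000
dx = 50.0 / n
approx = sum(integrand((i + 0.5)*dx) for i in range(n)) * dx
# approx should be extremely close to 1
```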
I wonder what _{0}^{infinity}x^{n}e^{-x}dx is? Are all of the values equal to 1? What's the pattern?
Go whole hog
"To engage in something without reservation or constraint"
"If you go the whole hog, you do something completely or to its
limits."
"To carry out or do something completely."
I will do all of these computations at the same time. Suppose n is a
positive integer. Define I_{n} to be
_{0}^{infinity}x^{n}e^{-x}dx,
which will be a convergent improper integral. What is its value? So I
will integrate by parts.
u=x^{n}, du=nx^{n-1}dx, dv=e^{-x}dx, v=-e^{-x}. The boundary term here is (x^{n})(-e^{-x})]_{0}^{A} (as A->infinity). When x=0 this "disappears" (for n a positive integer!) because of the first factor. And A^{n}e^{-A}->0 because exponential decay is faster than any polynomial growth. The minus signs cancel, and we see that knowing that the integral I_{n-1} converges implies that the integral I_{n} converges, and I_{n}=n·I_{n-1}.
So these integrals are ... (!!!)
We know these facts. If I_{n}=_{0}^{infinity}x^{n}e^{-x}dx,
then
I_{1}=1
I_{n}=n·I_{n-1}
Well, I happen to know some other numbers which obey these rules: they
are the factorials. Therefore I_{n}=n! for all n.
Officially this is a proof by a technique
called
mathematical induction.
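A numerical spot-check that I_n = n! (a sketch; the cutoff 60 and the step count are arbitrary choices):

```python
import math

def I(n, cutoff=60.0, steps=120_000):
    # midpoint approximation of the integral of x^n e^(-x) from 0 to "infinity"
    dx = cutoff / steps
    return sum(((i + 0.5)*dx)**n * math.exp(-(i + 0.5)*dx) for i in range(steps)) * dx

# I(1), ..., I(5) should be close to 1!, 2!, 3!, 4!, 5! = 1, 2, 6, 24, 120
approximations = [I(n) for n in range(1, 6)]
```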
Maple knows these integrals. Look, 120 is 5!:
> int(x^5*exp(-x),x=0..infinity);
                                  120
And therefore one-half factorial is ...
So the integral expression _{0}^{infinity}x^{n}e^{-x}dx
can be used to define n! for n's which are not
positive integers.
> int(sqrt(x)*exp(-x),x=0..infinity);
                               Pi^(1/2)/2
So (1/2)! is sqrt(Pi)/2. Not obvious!!!!
Maintained by greenfie@math.rutgers.edu and last modified 9/11/2005.