### Math 421 diary, fall 2005 Laplace Transforms


### Monday, September 26

More vibrations, even resonance: hitting a spring again
Keep the initial conditions the same, but change the forcing function. So here look at y´´+y=delta(t-Pi)+delta(t-3Pi). Now go through the Laplace transform, using the initial conditions:
s^2Y(s)-sy(0)-y´(0)+Y(s)=e^(-Pi·s)+e^(-3Pi·s) becomes
s^2Y(s)+Y(s)=e^(-Pi·s)+e^(-3Pi·s) and then
(s^2+1)Y(s)=e^(-Pi·s)+e^(-3Pi·s)
so that
Y(s)=[e^(-Pi·s)+e^(-3Pi·s)]/(s^2+1).
The inverse Laplace transform tells me that
y(t)=sin(t-Pi)U(t-Pi)+sin(t-3Pi)U(t-3Pi).

It may be useful to understand what this algebraic mess means.

• For example, if 0<t<Pi, y(t)=0: the position of the spring doesn't change from rest before it is hit.
• If Pi<t<3Pi, then y(t)=sin(t-Pi), a sine wave "starting" when time is Pi.
• If 3Pi<t, then y(t)=sin(t-Pi)+sin(t-3Pi). We could think about what this means (actually, it is probably easier to use Maple to graph y(t) but perhaps the effort will eventually help). Well, the sine function is periodic with period 2Pi, so that y(t)=sin(t-Pi)+sin(t-3Pi)=sin(t-Pi)+sin(t-Pi)=2sin(t-Pi). Hey: the function sort of looks the same but its amplitude doubles. The two "hits" reinforced each other, and made the spring deviate from equilibrium more. This is called resonance.
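Maple can graph y(t), but even a few lines of Python (my own quick check, not something we did in class) confirm the doubling numerically:

```python
import math

def y(t):
    # claimed solution of y'' + y = delta(t-Pi) + delta(t-3Pi), y(0)=y'(0)=0:
    # each unit impulse "turns on" a shifted sine wave
    out = 0.0
    if t > math.pi:
        out += math.sin(t - math.pi)
    if t > 3 * math.pi:
        out += math.sin(t - 3 * math.pi)
    return out

assert y(1.0) == 0.0                  # at rest before the first hit
t = 3.5 * math.pi                     # after the second hit ...
assert abs(y(t) - 2 * math.sin(t - math.pi)) < 1e-12   # ... amplitude doubled
```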
I then asked the class if this was smooth motion: if I were riding the spring, would I feel any jerks (yes, it is easy to make a joke here about feeling jerks)? More technically, what I am asking translates to: is y(t) a smooth (=differentiable) function? Certainly, inside the intervals 0<t<Pi and Pi<t<3Pi and 3Pi<t the description we have of y(t) is a rather simple one, and the functions involved have first (and second and third and ...) derivatives. The motion is smooth there. At t=Pi, the slope of the tangent line changes from 0 (on the left) to 1 (on the right). At t=3Pi, things are a bit more difficult to see, but the slope changes from 1 (on the left) to 2 (on the right): this is not smooth, and the travel of the spring certainly "jerks" there.

And even more resonance
What if the driving force hits every 2Pi after t=Pi? The mathematical model is y´´+y=SUM_{j=0}^{infinity} delta(t-(2j+1)Pi) with, again, y(0)=0 and y´(0)=0. Let us analyze this as we've already done. Of course, we could worry about the infinite sum, which is sort of a limit, but "Almost every reasonable limit should exist." I won't worry.
Exactly as before, the Laplace transform is Y(s)=SUM_{j=0}^{infinity} e^(-(2j+1)Pi·s)/(s^2+1). This Laplace transform already has built into it the initial conditions and the "rough" inhomogeneity of the forcing term.

The inverse Laplace transform is y(t)=SUM_{j=0}^{infinity} sin(t-(2j+1)Pi)U(t-(2j+1)Pi). Again please remember that sine is periodic with period 2Pi, and (2j+1)Pi differs from Pi by 2jPi. Therefore sin(t-(2j+1)Pi)=sin(t-Pi), and y(t)=SUM_{j=0}^{infinity} sin(t-Pi)U(t-(2j+1)Pi)=sin(t-Pi)·SUM_{j=0}^{infinity} U(t-(2j+1)Pi).

What's left in the sum is not a constant function, but a staircase function (the graph looks more or less like 4.3, #62). What about the motion? Is it smooth? Where is it smooth? The graph is a bit difficult to consider for t large, but if we ask Maple to "zoom in" near a specific t (I think this is t=3Pi) you can see a kink in the graph. At 3Pi, the slope changes from 1 to 2. At each (2j+1)Pi, the motion will be continuous (this spring doesn't fly apart!) but the graph will not be differentiable: the slope changes from j to j+1.
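The staircase claim is easy to test numerically; here is a sketch (mine, not from class) that truncates the infinite sum:

```python
import math

def y(t, terms=50):
    # y(t) = sin(t-Pi) * SUM_j U(t-(2j+1)Pi); the Heaviside sum is a staircase
    # counting how many impulses have happened by time t
    staircase = sum(1 for j in range(terms) if t > (2 * j + 1) * math.pi)
    return math.sin(t - math.pi) * staircase

# between the first and second hits the amplitude is 1 ...
assert abs(y(1.5 * math.pi) - 1.0) < 1e-12
# ... and between the second and third hits it is 2: each hit adds a step
assert abs(y(4.5 * math.pi) - (-2.0)) < 1e-12
```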
This is a great vocabulary day:
kink
a. a short backward twist in wire or tubing etc. such as may cause an obstruction.
b. a tight wave in human or animal hair.
jerk
a. a sharp sudden pull, twist, twitch, start, etc.
b. a spasmodic muscular twitch.
c. (in "pl.") [Brit.] [colloq.] exercises ("physical jerks").
d. [sl.] a fool; a stupid person.

And how about limiting the amplitude
Here is a special bonus problem for mechanical engineering students, and any others who can grasp these concepts. Our ideal spring y´´+y can have a damping term: y´´+Ky´+y for some real constant K. This might correspond to the spring vibrating in honey or 10W30 motor oil or ... whatever. Suppose we hit the spring again every 2Pi. I know that the model described above is, indeed, just a classroom model. It is hard to conceive of Hooke's law applying as the spring stretches more and more and more (4 light years?). So the design question: find a value of K so that the spring is limited in motion (with, say, |y(t)|<10) for a limited time, say 0<t<200. You must give some supporting evidence that your candidate for K actually restricts the motion in the required fashion.

Please note that a math person might actually try to compute the exact solution, and might then investigate the highest value of y(t), etc. What could an engineer do? Well, maybe build a spring system (difficult and perhaps expensive). Or find some other nearly experimental way to verify that a conjectured value of K works.
Hint After class, I returned to my office, and guessed a value of K, and got good enough verification within 45 seconds, with some help from ...
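One cheap "nearly experimental" verification can be done in a few lines of code. This is only a sketch under my own assumptions: each delta impulse is modeled as an instantaneous jump of +1 in the velocity y´, the stepper is a crude semi-implicit Euler, and K=0.5 is my guess, not necessarily the value found in the office.

```python
import math

def simulate(K, t_end=200.0, dt=0.002):
    # crude experiment for y'' + K y' + y = SUM_j delta(t-(2j+1)Pi),
    # y(0)=y'(0)=0; each unit impulse is modeled as a jump of +1 in y'
    hits = [(2 * j + 1) * math.pi for j in range(int(t_end / (2 * math.pi)) + 1)]
    y, v, t, k, max_abs = 0.0, 0.0, 0.0, 0, 0.0
    for _ in range(int(t_end / dt)):
        if k < len(hits) and t >= hits[k]:
            v += 1.0                 # the hammer blow
            k += 1
        v += dt * (-K * v - y)       # semi-implicit Euler for y'' = -K y' - y
        y += dt * v
        t += dt
        max_abs = max(max_abs, abs(y))
    return max_abs

assert simulate(0.0) > 10    # no damping: resonance blows past the limit
assert simulate(0.5) < 10    # K = 0.5 keeps |y(t)| < 10 on 0 < t < 200
```

Whether this counts as "supporting evidence" is your call; it is the same kind of check Maple gave me in 45 seconds.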

QotD
I asked people to find a formula for the Laplace transform, Y(s), of the solution to the Initial Value Problem:
y´´(t)+3y´(t)+y(t)=t+delta(t-2)
y(0)=1 and y´(0)=-2.
How to do this: I would try to use effectively the dictionary of Laplace transforms displayed on the board. So the equation
y´´(t)+3y´(t)+y(t)=t+delta(t-2)
becomes:
s^2Y(s)-sy(0)-y´(0)+3(sY(s)-y(0))+Y(s)=1/s^2+e^(-2s)
and, inserting the initial conditions, we get:
s^2Y(s)-s+2+3(sY(s)-1)+Y(s)=1/s^2+e^(-2s)
and collecting Y(s) terms:
(s^2+3s+1)Y(s)-s+2-3=(1/s^2)+e^(-2s)
so that
Y(s)=[(1/s^2)+e^(-2s)+s+1]/(s^2+3s+1).

By the way, amid the general hilarity, I neglected to inform you that my solution, which I did half an hour before class, also had an error. Sigh. I then asked the following question, which I believe is useful to think about. Suppose we have this Laplace transform. What sorts of functions should we expect in the inverse transform? It's my hope that you would have a computer algebra system to solve the equation, but you should be able to make some rough checks on the answers.

I notice that s^2+3s+1 has real roots (the discriminant, 3^2-4·1·1=5, is positive). Therefore I would not expect sine or cosine. I would expect some sums of exponentials. I would also expect some Heaviside function entry for two reasons: first, there is a delta in the right-hand side, and we can only get that as a derivative of U; second, the e^(-2s) in the transform, which the second translation theorem will turn into a U. Now let's see what Maple answers, assuming I can type correctly.

```> with(inttrans):
> invlaplace(((1/s^2)+exp(-2*s)+s+1)/(s^2+3*s+1),s,t);

  -3 + t + 1/5 (10 + 3 5^(1/2)) exp(1/2 (5^(1/2) - 3) t)
         + 1/5 (10 - 3 5^(1/2)) exp(-1/2 (5^(1/2) + 3) t)
         + 1/5 5^(1/2) (exp(1/2 (5^(1/2) - 3) (t - 2))
                        - exp(-1/2 (5^(1/2) + 3) (t - 2))) Heaviside(t - 2)
```
(Maple's two-dimensional output, retyped here in compact one-line style.)
What a mess, but, qualitatively, the guesses above are verified. I think.

Section 4.6 in 4.6 minutes
This section uses the Laplace transform to solve systems of linear ODE's. Such systems arise in both chem and mech engineering applications! Here is problem 5 from that section:
```2 dx/dt + dy/dt - 2x      = 1

  dx/dt + dy/dt - 3x - 3y = 2

x(0)=0 and y(0)=0```
This is a textbook problem, so it will not be very difficult (maybe). And this one was rehearsed (!). You may be able to see various ways to solve this problem, but let me be obedient to chapter 4, and use Laplace transforms.

Here are the Laplace transforms of the equations, with the rather silly initial conditions used.

```2sX(s)+sY(s)-2X(s)=(1/s)
sX(s)+sY(s)-3X(s)-3Y(s)=(2/s)```
Let's collect things:
```(2s-2)X(s) +     sY(s) = (1/s)
(s-3)X(s) + (s-3)Y(s) = (2/s)```
Now this is a system of two linear equations in two unknowns. I will divide the first equation by (2s-2):
`  1X(s) + (s/[2s-2])Y(s)=(1/[s(2s-2)])`
and now I will multiply the modified first equation by (s-3) and subtract it from the second equation. This is resulting second equation:
`  0X(s) + ((s-3)-(s-3)(s/[2s-2]))Y(s)=(2/s)-(s-3)(1/[s(2s-2)])`
Wow! Now I have isolated Y(s) and can divide by its coefficient, so that
```        (2/s) - (s-3)(1/[s(2s-2)])
Y(s) = ----------------------------
         (s-3) - (s-3)(s/[2s-2])```

Maple tells me this:

```> invlaplace(((2/s)-(s-3)*(1/(s*(2*s-2))))/((s-3)-(s-3)*(s/(2*s-2))),s,t);
- 1/6 + 8/3 exp(3 t) - 5/2 exp(2 t)```
which is actually the answer in the back of the book. As I mentioned in class, I had to try four times to type the input correctly, matching the parentheses. But let me show you how to find the inverse Laplace transform "by hand".

I will multiply top and bottom by s(2s-2). I will also "factor out" the s-3 in the bottom.

```       (2/s) - (s-3)(1/[s(2s-2)])      2(2s-2) - (s-3)
Y(s) = --------------------------- = --------------------
        (s-3) - (s-3)(s/[2s-2])      (s-3)(s(2s-2) - s^2)```
The top: 2(2s-2)-(s-3)=4s-4-s+3=3s-1.
The bottom: (s-3)[s(2s-2)-s^2]=(s-3)[2s^2-2s-s^2]=(s-3)[s^2-2s]=(s-3)(s-2)s.
Therefore
```           3s-1        A     B     C   A(s-2)s + B(s-3)s + C(s-3)(s-2)
Y(s) = ------------- = --- + --- + - = -------------------------------
        (s-3)(s-2)s    s-3   s-2   s            (s-3)(s-2)s```
and 3s-1=A(s-2)s+B(s-3)s+C(s-3)(s-2). Plug in s=0 to get C=-1/6, plug in s=2 to get B=-5/2, and plug in s=3 to get A=8/3. The answer will be what the book and Maple predicted.
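One non-Laplace check: solve the original pair of equations for the derivatives (subtracting them gives dx/dt = -1 - x - 3y, and then the second equation gives dy/dt = 3 + 4x + 6y) and integrate numerically. A sketch in Python, comparing against the partial-fraction answer:

```python
import math

def rhs(x, y):
    # the system 2x'+y'-2x=1, x'+y'-3x-3y=2, solved for x' and y'
    return (-1 - x - 3 * y, 3 + 4 * x + 6 * y)

def integrate(t_end=0.5, dt=1e-4):
    # classical RK4 on the first-order system, x(0)=y(0)=0
    x = y = 0.0
    for _ in range(int(t_end / dt)):
        k1x, k1y = rhs(x, y)
        k2x, k2y = rhs(x + dt / 2 * k1x, y + dt / 2 * k1y)
        k3x, k3y = rhs(x + dt / 2 * k2x, y + dt / 2 * k2y)
        k4x, k4y = rhs(x + dt * k3x, y + dt * k3y)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    return x, y

def y_exact(t):
    # the answer from partial fractions: -1/6 + (8/3)e^(3t) - (5/2)e^(2t)
    return -1 / 6 + (8 / 3) * math.exp(3 * t) - (5 / 2) * math.exp(2 * t)

x_num, y_num = integrate()
assert abs(y_num - y_exact(0.5)) < 1e-6
```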

I could get X(s) by putting the complicated rational function description of Y(s) in the equation 1X(s) + (s/[2s-2])Y(s)=(1/[s(2s-2)]) and then solving for X(s). I am lazy and won't do this. Last year I discussed a simpler example.

What's going on?
Those students who have some background in linear algebra might see some sort of pattern. I was doing row reduction in the system of linear equations given for X(s) and Y(s). I was thinking of various functions involving s as the scalars (!). My goal was to get 1 and 0 as scalars.

Linear algebra ...
Linear algebra is a subject which involves both scalars and vectors. Let me tell you about both of them, from an "operational" point of view.

• Scalars are sort of the adjectives, while vectors are the nouns. I will expect to add, subtract, multiply and divide scalars. The most immediate examples of scalars which come to mind are the real numbers and the complex numbers. But even the most applied engineer, if such a person exists, will be helped by acknowledging that other collections of useful scalars occur. Please see below.
• Vectors are creatures which can be multiplied by scalars. I can add and subtract vectors. The whole collection of vectors in a problem under consideration is called a vector space. Questions about linear independence and dimension, etc., are applied to vectors and vector spaces. The simplest collections of vectors are R^n and C^n. These are n-tuples of real and complex numbers.
• Geometry and mechanics
Here vectors are forces and directed line segments in R^2 (the plane) and R^3 (space). These objects may represent forces or fluid flow or heat flow or ... and we add them and multiply them by scalars (in this case, just real numbers). Adding them gives resultants and various components of the vectors have names like flux. We can draw nice simple pictures whose geometry is frequently appealing.
• Vectors in ODE's
Here the vectors are n-tuples of real or complex numbers. They arise as part of the effort to solve ODE's efficiently. Complex numbers are introduced because diagonalizing matrices allows efficient computation of various things, and you can't always diagonalize matrices without using complex numbers.
• Laplace transforms If we take the Laplace transform of systems of ODE's, as in section 4.6 of the text, we get collections of equations involving the Laplace transforms of unknown functions with rational functions (quotients of polynomials) as coefficients. For example, if you've done some problems in that section of the book, you would expect several equations similar to what follows:
[(s-1)/(s^2+4)]X(s) + [s/(s+2)^3]Y(s) = 1/(s+7)+2s-7
and then we want to find expressions for one of the unknown functions (X(s) or Y(s)) in terms only of s, and then take the inverse Laplace transform. The initial conditions are all invisibly part of equations like the one I just wrote. The vectors here are X(s) and Y(s), and the scalars are rational functions such as [(s-1)/(s^2+4)] and [s/(s+2)^3] and 1/(s+7)+2s-7. You can multiply and divide and add and subtract rational functions. The results will again be rational functions.
• Fourier series We will look at this in the last part of the course. Fourier series look like 3sin(5t)+5cos(7t): sums of constants multiplied by trig functions. The applicable trig functions here will be sine and cosine (not cosecant, thank goodness!). The vectors are functions like sin(5t) and cos(7t), and the scalars in this setting are usually real constants. These series are an extremely useful way to analyze certain partial differential equations, PDE's, and their boundary value problems. These problems occur when trying to understand heat flow, diffusion, vibrations ... lots of stuff. There also turns out to be a complex version of these sums, involving things like (5+2i)e^(3it). Here the scalars are complex numbers, while the vectors are functions like e^(3it).
• Digital signal processing Here the sums are sort of square waves which are the vectors, and the scalars essentially turn out to be (this seems ludicrous when first encountered!) just 0 (off) and 1 (on). Extremely elaborate ideas and algorithms having to do with storage and efficient transmission and transformation of signals are expressed in this language. A chief algorithm is the Fast Fourier Transform (connected with such tools as CAT scans and magnetic resonance), and one way of looking at this algorithm is that it involves writing a matrix as a product in a particularly efficient way. The scalars seem like a ludicrously small collection, but signals are constructed with many, many scalars put together.

HOMEWORK
I would like to give an exam in two weeks, on Monday, October 10. The exam would cover our work on Laplace transforms and some basic linear algebra. Please check the draft version of a formula sheet for the first exam and give me comments.
The exam will cover what we've done on Laplace transforms and two lectures on linear algebra, so you probably should read ahead about linear algebra (see the syllabus & textbook problems).
Further information about the exam will be available soon.


### Thursday, September 22

Heaviside
Oliver Heaviside wrote:
Should I refuse a good dinner simply because I do not understand the process of digestion?
Here he referred to his "operational calculus", which was a nontraditional method of constructing mathematical models. The method gave useful answers rapidly and directly, but in many cases could not be rigorously justified using techniques then accepted. Generally, I think Heaviside would have agreed with the following statements:

• Almost every reasonable limit should exist.
• Almost every reasonable function should have a derivative.
Agenda for today's lecture
1. A simple mechanics problem
2. Some math mumbles
3. Chem engineering problem
4. Several mechanical engineering problems

A simple mechanics problem
I asked people to consider the frictionless eraser. I tried to slide an eraser along the narrow shelf at the bottom of the blackboard, asking people to imagine there was no friction. My mathematical analysis (!) of this problem used F=ma, a wonderful equation. So here we go: I use F=ma. I will assume that we are analyzing the motion of a particle on a line. At time t, the particle is at x(t). I will also assume (since this is a very simple model!) that m=1 for all time t. If the particle starts from 0 at time 0 with velocity 0, the initial conditions are x(0)=0 and x´(0)=0. The force will vary with time. What can one expect of such a problem?

The setup
 Here's an example of a possible force varying with time, t. Initially, the particle would not move. Then, yes, indeed!, it moves as t increases (once t gets to the region where F(t)>0). Since F(t)>0 there, the particle moves to the right, or (if we consider a graph of t, time, versus x(t), position) "up". Here is what a graph of x(t) might look like, qualitatively. What is happening? For early time, before the bump in F(t), the particle doesn't move at all. After the bump in F(t), the particle does move, and the amount of movement, it turns out, depends only on the total area under the bump. The total area determines the slope. The particle moves at uniform speed because there is no new force acting on it. What does the total area represent? Well, work is force·distance, but this is essentially force·time, which in this setup is momentum. (Why? Since x´´(t)=F(t), x´(t)=p(t), the momentum (remember here m=1 always). Thus p´(t)=F(t), and the total area under F(t) is the change in momentum, as Mr. Wolf suggested.) Professor M. Kiessling, a wonderful mathematical physicist, helped (forced?) me to agree to this. In this simple setup, I think that work=energy and momentum gained are proportional. In any case, after the bump, the curve showing position is linear, with a positive slope. The slope is directly proportional to the total net area.

The example
 The setup
Suppose the graph of F(t) is as shown. Let's find the motion of the point. Since x´´(t)=F(t) with x´(0)=0 and x(0)=0, and this is Math 421, we should use the Laplace transform. Well, we can write F(t) in a suitable form: F(t)=(1/A)U(t)-(1/A)U(t-A). Then take the Laplace transform of x´´(t)=F(t) and get, on the left-hand side (with initial conditions built in), s^2X(s)-sx(0)-x´(0). On the right, we will get (1/A)(e^(-0·s)/s)-(1/A)(e^(-As)/s). Therefore we have s^2X(s)=(1/A)[1-e^(-As)]/s and X(s)=(1/A)(1/s^3)-(1/A)e^(-As)(1/s^3). We now take the inverse Laplace transform. You will need to know the table and the second translation theorem! x(t)=(1/A)(1/2)t^2-(1/A)(1/2)(t-A)^2U(t-A). For 0<t<A, x(t)=(1/A)(1/2)t^2. For t>A, x(t)=(1/A)(1/2)[t^2-(t-A)^2]=(1/A)(1/2)(2At-A^2)=t-A/2. It can be checked that x(A)=A/2 for either definition, so the motion is continuous (the eraser didn't suddenly and magically jump!) and actually, the function described is differentiable: the tangent lines agree at the "splices" (t=0 and t=A). A graph of the position is shown.
Limits of the motion and what we expect
What happens now as A-->0? Remember: Almost every reasonable limit should exist! The force had Laplace transform (1/A)(1/s)-(1/A)e^(-As)(1/s)=(1/s)(1/A)[1-e^(-As)]. As A-->0, we can try to evaluate the limit since "Almost every reasonable limit should exist." This is the algebraic side of the picture of the particle's motion. Good engineers try first to just "plug in": when A=0 here, the result is 0/0. The next thought should be l'Hopital's rule. The limit of (1/s)[1-e^(-As)]/A as A-->0 is an indeterminate form of appropriate type (0/0). So take the derivative of the top and bottom with respect to A. The result is (1/s)[0-(-s)e^(-As)]/1. As A-->0, this goes to 1. Therefore, the Laplace transform of the limit of the boxes should be 1. Geometrically, the area inside the colored circle is shrinking to the origin as A-->0, and the limiting solution is just two straight half-lines. The impressed force, F(t), becomes more and more instantaneous.
The motion becomes more and more like a horizontal line for t<0 and a ray, x(t)=t, for t>0.
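The splices and the l'Hopital limit can both be checked numerically; a quick sketch (my own, with A=0.3 and s=2 chosen arbitrarily):

```python
import math

def x_pos(t, A):
    # x(t) = (1/A)(1/2)t^2 - (1/A)(1/2)(t-A)^2 U(t-A)
    out = t * t / (2 * A)
    if t > A:
        out -= (t - A) ** 2 / (2 * A)
    return out

A = 0.3
# continuity at t=A: both formulas give A/2
assert abs(x_pos(A, A) - A / 2) < 1e-12
# after the push, x(t) = t - A/2: uniform speed
assert abs(x_pos(2.0, A) - (2.0 - A / 2)) < 1e-12
# l'Hopital claim: (1/s)(1 - e^(-As))/A --> 1 as A --> 0 (here s=2, fixed)
s = 2.0
vals = [(1 / s) * (1 - math.exp(-a * s)) / a for a in (0.1, 0.01, 0.001)]
assert abs(vals[-1] - 1) < 0.01
```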

Dirac delta "function"
Dirac, a Nobel prize-winning mathematical physicist, worked on quantum mechanics and relativity. Here is an interesting quote from Dirac: "I consider that I understand an equation when I can predict the properties of its solutions, without actually solving it." A contemporary interview with Dirac may give some idea of his personality.

The Dirac delta function is the limit of these square impulse functions as A-->0+. The Dirac delta function, delta(t), is 0 away from 0. Its total integral is 1. You should think about it as an instantaneous bump, a limit of the boxes above as A-->0. I computed some integrals. They resembled the following:

I computed the integral of delta(t-2) dt from -300 to 40: this was 1.
The integral of delta(t-2) dt from -300 to -200 was 0. The bump or impulse was centered at 2, and 2 is not in the interval from -300 to -200.
I tried some further computations. I think one was something like the integral of delta(t-2)(t^2-5t) dt from -300 to 40. Since the delta function is 0 away from 2, only what multiplies it at (or close to) 2 matters. But close to 2, the function t^2-5t is close to 4-10=-6. Therefore this integral is just -6.

Remember: Almost every reasonable function should have a derivative.
Let's approximate U(t) by another function, f_epsilon(t). Here you should think of epsilon as a very small positive number. For negative t, f_epsilon(t)=0. For t>epsilon, f_epsilon(t)=1. In a small interval immediately to the right of 0, f_epsilon(t) increases smoothly from 0 to 1. You are supposed to think that f_epsilon(t) closely approximates U(t). Then f_epsilon´(t) is a bump localized (?) in a small interval to the right of 0. The total integral under the bump is 1, using the Fundamental Theorem of Calculus, since the integral of f_epsilon´(t) dt from -10 to 37 is f_epsilon(37)-f_epsilon(-10)=1-0=1.

If we multiply f_epsilon´(t) by a continuous function g(t), in the small interval (a really small interval!) g(t) will hardly vary and will be almost equal to g(0). So the integral of f_epsilon´(t)g(t) will be approximately g(0) multiplied by the integral of f_epsilon´, which is 1, so the result is approximately g(0). Thus f_epsilon´(t) is approximately delta(t). So as epsilon-->0+, f_epsilon-->U and f_epsilon´-->delta. So, "clearly" U´=delta.
I sketched these ideas here because many people in the engineering and applied science community (physics, chemistry) find them useful. Maybe you will, also.
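Here is a numerical version of the sketch, with one specific smooth ramp for f_epsilon (the cubic 3u^2-2u^3 is my choice; any smooth increasing ramp would do):

```python
def f_eps_prime(t, eps):
    # derivative of the smooth ramp f_eps(t) = 3u^2 - 2u^3, u = t/eps:
    # a bump supported on [0, eps] whose total area is exactly 1
    if t <= 0 or t >= eps:
        return 0.0
    u = t / eps
    return (6 * u - 6 * u * u) / eps

def integral_fprime_g(g, eps, n=20000):
    # trapezoid rule for the integral of f_eps'(t) g(t) dt over [0, eps]
    h = eps / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * f_eps_prime(i * h, eps) * g(i * h) * h
    return total

g = lambda t: t * t - 5 * t + 7       # any continuous g; here g(0) = 7
assert abs(integral_fprime_g(g, eps=1e-3) - g(0)) < 1e-2
```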

I wanted to make the Chem E's happy. Here is a two-part problem which I found last year. It is from
an essay by Kurt Bryan of the Rose-Hulman Institute of Technology:

 A salt tank contains 100 liters of pure water at time t=0 when water begins flowing into the tank at 2 liters per second. The incoming liquid contains 1/2 kg of salt per liter. The well stirred liquid flows out of the tank at 2 liters per second. Model the situation with a first order ODE and find the amount of the salt in the tank at any time.

I expect students had modeled and solved such problems a number of times in various courses, and certainly in Math 244.

Let m(t) be the kgs of salt in the tank at time t in seconds. We are given information about how m is changing. Indeed, m(t) is increasing by (1/2)·2=1 kg/sec (mixture coming in) and decreasing by (2/100)m(t), the part of the salt in the tank at time t leaving each second. So we have:
m´(t)=1-(1/50)m(t) and m(0)=0 since there is initially no salt in the tank.
Students seemed to realize that the solution curve should look like this: a curve starting at the origin, concave down, increasing, and asymptotic to m=50. In the long term, we expect about 50 kgs of salt in the tank. This is easy to solve by a variety of methods, but in 421 we should use Laplace transforms: so let's look at the Laplace transform of the equation. We get sM(s)-m(0)=(1/s)-(1/50)M(s). Use m(0)=0 and solve for M(s). The result is M(s)=1/[s(s+(1/50))]. This splits by partial fractions with some guesses to M(s)=50[(1/s)-(1/(s+(1/50)))]. It is easy (it should be easy for you by now!) to find the inverse Laplace transform and write m(t)=50(1-e^(-(1/50)t)) which is certainly the expected solution.
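A rough check that does not need Maple at all: verify directly (here numerically, in Python) that the formula satisfies the ODE, the initial condition, and the expected long-term behavior.

```python
import math

def m(t):
    # claimed solution m(t) = 50(1 - e^(-t/50))
    return 50 * (1 - math.exp(-t / 50))

def m_prime(t, h=1e-6):
    # centered difference approximation to m'(t)
    return (m(t + h) - m(t - h)) / (2 * h)

assert m(0) == 0                                      # no salt at the start
for t in (0.0, 10.0, 100.0):
    assert abs(m_prime(t) - (1 - m(t) / 50)) < 1e-6   # the ODE m' = 1 - m/50 holds
assert abs(m(10000) - 50) < 1e-6                      # about 50 kg in the long run
```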

Slightly more interesting ...

 Suppose that at time t=20 seconds 5 kgs of salt is instantaneously dropped into the tank. Modify the ODE from the previous part of the problem and solve it. Plot the solution to make sure it is sensible.

Students should expect a jump in m(t) at time 20 of 5, but then the solution should continue to go asymptotically to 50 when t is large. How should the ODE be modified to reflect the new chunk of salt? Here is one model:
m´(t)=1-(1/50)m(t)+5delta(t-20) and m(0)=0.
The delta function at t=20 represents the "instantaneous" change in m(t) at time 20, an immediate impulsive (?) increase in the amount of salt present.

It is very reasonable to ask if this is a good model. Please keep this is mind!

Back to solving m´(t)=1-(1/50)m(t)+5delta(t-20) and m(0)=0. Take the Laplace transform as before, and use the initial condition as before. The result is now sM(s)-m(0)=(1/s)-(1/50)M(s)+5e^(-20s) from which we get M(s)=50[(1/s)-(1/(s+(1/50)))]+5[e^(-20s)/(s+(1/50))] which has inverse Laplace transform m(t)=50(1-e^(-(1/50)t))+5U(t-20)e^(-(1/50)(t-20)). Of course I wanted to check my answer, so I used Maple. Here is the command line and here is Maple's response:

```> invlaplace(1/(s*(s+(1/50)))+5*exp(-20*s)/(s+(1/50)),s,t);

  -50 exp(-t/50) + 50 + 5 Heaviside(t - 20) exp(-t/50 + 2/5)```
so I am happy. I should remark that last year's version of Maple gave this response:
100*exp(-1/100*t)*sinh(1/100*t)+5*Heaviside(t-20)*exp(-1/50*t+2/5)
Well, Maple multiplied 1/50 by 20 to get 2/5. Slightly more interesting is the sinh (hyperbolic sine, pronounced "cinch") part. Apparently users of Maple are expected to know that sinh(w)=(e^w-e^(-w))/2. If you know that, then you can see that the old Maple's answer is equal to ours.
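If you'd rather not push the sinh identity through by hand, a few lines of Python (my check, comparing the solution with the +5U(t-20)e^(-(t-20)/50) term against last year's Maple output) will do it:

```python
import math

def m(t):
    # 50(1 - e^(-t/50)) + 5 U(t-20) e^(-(t-20)/50), matching this year's Maple
    out = 50 * (1 - math.exp(-t / 50))
    if t > 20:
        out += 5 * math.exp(-(t - 20) / 50)
    return out

def m_old(t):
    # last year's Maple: 100 e^(-t/100) sinh(t/100) + 5 U(t-20) e^(-t/50+2/5)
    out = 100 * math.exp(-t / 100) * math.sinh(t / 100)
    if t > 20:
        out += 5 * math.exp(-t / 50 + 2 / 5)
    return out

for t in (0.0, 5.0, 19.9, 20.1, 60.0, 300.0):
    assert abs(m(t) - m_old(t)) < 1e-9               # the two answers agree
assert abs((m(20.0000001) - m(19.9999999)) - 5) < 1e-4   # the 5 kg jump at t=20
```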

Some pictures Maple made for me

 Here's the solution curve for the problem just solved, with an instantaneous increase of 5kg at t=20. The curve jumps up, and then begins to approach 50. I modified the problem, and dumped in 30 kg of salt at time 60. So you can see the jump above the asymptote, and then the curve trends down towards 50.

Hitting a spring
Let's look at y´´+y=delta(t-Pi) with initial conditions y(0)=0 and y´(0)=0. O.k.: to me, what I hope is interesting for the students in this course is the physical interpretation of everything. Then we should "solve" it, and then we should consider the solution from the physical point of view: if it makes sense, this helps to validate the method we used.

Well: y´´+y models a vibrating spring with no damping, a rather ideal situation. The right-hand side, delta(t-Pi), is a "forcing function". In fact, here it corresponds to hitting the spring instantaneously (!) with a "force" of 1 at time Pi. The initial conditions tell us that the spring is originally in equilibrium. At time Pi, the spring is hit but there is no further force (so far). Probably we should expect that the spring will somehow vibrate in an ideal fashion. Let us try our wonderful Laplace routines. The Laplace transform of y´´+y=delta(t-Pi) is s^2Y(s)-sy(0)-y´(0)+Y(s)=e^(-Pi·s) which turns out to be (s^2+1)Y(s)=e^(-Pi·s) or Y(s)=e^(-Pi·s)/(s^2+1). The predicted vibration of the spring is therefore (!) the inverse Laplace transform of e^(-Pi·s)/(s^2+1). I can read that off from the (mythical?) table using the second translation theorem and the result is y(t)=sin(t-Pi)U(t-Pi): the spring, responding to an instantaneous load of 1 unit, then vibrates sinusoidally.

Hitting a spring again
Keep the initial conditions the same, but change the forcing function. So here look at y´´+y=delta(t-Pi)+delta(t-2Pi). Now go through the Laplace transform, using the initial conditions:
s^2Y(s)-sy(0)-y´(0)+Y(s)=e^(-Pi·s)+e^(-2Pi·s) becomes
s^2Y(s)+Y(s)=e^(-Pi·s)+e^(-2Pi·s) and then
(s^2+1)Y(s)=e^(-Pi·s)+e^(-2Pi·s)
so that
Y(s)=[e^(-Pi·s)+e^(-2Pi·s)]/(s^2+1).
The inverse Laplace transform tells me that
y(t)=sin(t-Pi)U(t-Pi)+sin(t-2Pi)U(t-2Pi).

It may be useful to understand what this algebraic mess means.

• For example, if 0<t<Pi, y(t)=0: the position of the spring doesn't change from rest before it is hit.
• If Pi<t<2Pi, then y(t)=sin(t-Pi), a sine wave "starting" when time is Pi. We in fact verified, both by pure thought (move the sine wave Pi units) and by using the addition formula for sine, that sin(t-Pi)=-sin(t), so in this interval, y(t)=-sin(t), a positive bump.
• If 2Pi<t, then y(t)=sin(t-Pi)+sin(t-2Pi). Since sine is periodic with period 2Pi, sin(t-2Pi)=sin(t) (again, you can use the addition formula for sine if you wish). But sin(t-Pi) is still -sin(t), so that for these t's, y(t)=-sin(t)+sin(t)=0.
The two impulses exactly cancel (remember, there is no damping in this model!). That's the story. More next time.
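The cancellation can also be seen "experimentally". A sketch (my own, modeling each unit impulse as an instantaneous jump of +1 in the velocity y´ and stepping with semi-implicit Euler):

```python
import math

def simulate(t_end, dt=1e-4):
    # y'' + y = delta(t-Pi) + delta(t-2Pi), y(0)=y'(0)=0;
    # each unit impulse = an instantaneous jump of +1 in y'
    hits = [math.pi, 2 * math.pi]
    y, v, t, k = 0.0, 0.0, 0.0, 0
    for _ in range(int(t_end / dt)):
        if k < len(hits) and t >= hits[k]:
            v += 1.0
            k += 1
        v -= dt * y          # semi-implicit Euler for y'' = -y
        y += dt * v
        t += dt
    return y

# between the hits, y(t) = sin(t-Pi): check at t = 1.5 Pi, where y = 1
assert abs(simulate(1.5 * math.pi) - 1.0) < 1e-2
# after the second hit the impulses cancel and the spring sits still
assert abs(simulate(3 * math.pi)) < 1e-2
```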

HOMEWORK
The grader, a wonderful human being, talked to me about the homework. She encouraged me to ask some even numbered questions. She also remarked that students should give some reasons for their transitions from the problem statements to the problem solutions. I emphatically agree with that, since this will be needed on exams.
Monday's lecture will be the last on Laplace transforms. Please read the appropriate sections of the book and do more than the minimal number of problems! Please hand in Monday these problems: 4.4: 26, 33 and 4.5: 8, 9.

### Monday, September 19

 Confession
I asked students to do a number of problems at the board.

Mr. Clark intelligently remembered a fact. So he did the following: the Laplace transform of e^(-9t) is 1/(s+9), so the Laplace transform of te^(-9t) is -d/ds[1/(s+9)], and this is 1/(s+9)^2. I needed to be reminded that there are two minus signs in the result and they cancel. Sigh.

Find the Laplace transform of e^(-5t)·t·U(t-2)
This problem is vicious (viscous?). It involves a concatenation of both translation results.
Today's word concatenation
To connect or link in a series or chain.
Let's do this two ways.

• First translation then second
So e^(-5t)·t·U(t-2) becomes e^(-5t)(tU(t-2)). Now the Laplace transform of tU(t-2) is e^(-2s)(1/s^2+2/s). We need to "plug in" s+5 for each s in this answer. The result is e^(-2(s+5))(1/(s+5)^2+2/(s+5)).
• Second translation then first
So e^(-5t)·t·U(t-2) becomes (e^(-5t)t)U(t-2). Now g(t)=e^(-5t)t and, since a=2 here, g(t+a)=g(t+2)=e^(-5(t+2))(t+2)=e^(-10)e^(-5t)(t+2). This has Laplace transform e^(-10)(1/(s+5)^2+2/(s+5)). Finally, to finish the use of the second translation theorem, we need to multiply by e^(-2s), so the whole answer is e^(-2s)e^(-10)(1/(s+5)^2+2/(s+5)).
• Huh?
Thank goodness the answers agree. I think I would use the first method, but this is solely a psychological (psychotic?) choice. I wanted to see what a silicon friend would do, so I asked:
```> with(inttrans);
[addtable, fourier, fouriercos, fouriersin, hankel, hilbert, invfourier, invhilbert, invlaplace,
 invmellin, laplace, mellin, savetable]

> laplace(exp(-5*t)*t*Heaviside(t-2),t,s);

  exp(-2 s - 10) (2 s + 11) / (s + 5)^2```
The answer has been made "pretty", and I can't tell how it was computed.
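In fact the pretty answer is ours: pull out 1/(s+5)^2, since 1/(s+5)^2+2/(s+5)=(1+2(s+5))/(s+5)^2=(2s+11)/(s+5)^2. A quick numerical sanity check:

```python
import math

def two_steps(s):
    # our two-translation answer: e^(-2(s+5)) (1/(s+5)^2 + 2/(s+5))
    return math.exp(-2 * (s + 5)) * (1 / (s + 5) ** 2 + 2 / (s + 5))

def maple_pretty(s):
    # Maple's prettified answer: e^(-2s-10) (2s+11)/(s+5)^2
    return math.exp(-2 * s - 10) * (2 * s + 11) / (s + 5) ** 2

for s in (0.5, 1.0, 3.0, 10.0):
    assert abs(two_steps(s) - maple_pretty(s)) < 1e-15
```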

The QotD from last time
Ms. Jones kindly put her solution on the board. The given information was a mixture of geometric and algebraic. The graph of a function was drawn. It was 0 for t<1 and t>3, and was -(t-1)(t-3) for t between 1 and 3. I asked for the Laplace transform of this function. I emphasized that it would probably be easier to begin by describing the function using the Heaviside function.

• Translation with Heaviside
We "turn on" -(t-1)(t-3) at 1, and then must remember to turn it off at 3. So write -(t-1)(t-3)U(t-1)-[-(t-1)(t-3)]U(t-3). This is -(t-1)(t-3)U(t-1)+[(t-1)(t-3)]U(t-3).

• Laplace transform
Since Laplace transform is linear, we will work separately with each piece. And, on each piece, we'll use the second translation theorem: the Laplace transform of g(t)U(t-a) is e-as multiplied by the Laplace transform of g(t+a).
For -(t-1)(t-3)U(t-1), we see that a=1, and g(t)=-(t-1)(t-3). So g(t+a)=g(t+1)=-(t+1-1)(t+1-3)=-t(t-2)=-t2+2t. In replacing t by t+1 in g, you've got to be careful and replace every appearance of t in the function's algebraic description. Now the Laplace transform of -t2+2t is -2/s3+2/s2. And we must multiply this by e-as=e-s. The result is e-s(-2/s3+2/s2).
For (t-1)(t-3)U(t-3), a=3 and g(t)=(t-1)(t-3), so g(t+a)=g(t+3)=(t+2)t=t2+2t, whose Laplace transform is 2/s3+2/s2. We need to multiply by e-as=e-3s. Here the result is e-3s(2/s3+2/s2).

The answer is the sum of the two pieces: e-s(-2/s3+2/s2)+e-3s(2/s3+2/s2).

• Is there no way to check anything in this?
Well, yeah, there is, and I just did it.
We know that as s-->0+, F(s) should get close to the total area under the bump in the picture. And I just checked: it does! You can do this, also. (The area under the bump is 4/3; now evaluate the limit.)
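The check can also be done numerically. A sketch (the sample points s=1 and s=0.001 are my choices): the closed form should match the defining integral over the bump's support [1,3], and should approach the area 4/3 as s shrinks.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

def F(s):
    # transform of the bump: e^(-s)(-2/s^3 + 2/s^2) + e^(-3s)(2/s^3 + 2/s^2)
    return (math.exp(-s) * (-2 / s**3 + 2 / s**2)
            + math.exp(-3 * s) * (2 / s**3 + 2 / s**2))

# F(s) should agree with the defining integral (the bump lives on [1, 3])
numeric = simpson(lambda t: math.exp(-t) * (-(t - 1) * (t - 3)), 1.0, 3.0)
assert abs(numeric - F(1.0)) < 1e-8

# and as s -> 0+, F(s) should approach the area under the bump, 4/3
assert abs(F(0.001) - 4 / 3) < 0.01
```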

Think about this result, and let's include a computation from the previous class. Let me rearrange things algebraically a bit.

Function | Its Laplace transform
t2 | 2/s3
t3 | 6/s4
(1/60)t6 | {6!/60}/s7
Notice that 6!/60 is 12.
eAt | 1/(s-A)
eBt | 1/(s-B)
[1/(A-B)]{eAt-eBt} | [1/(A-B)]{1/(s-A)-1/(s-B)}
Notice that, combining fractions, this happens (?) to equal 1/[(s-A)(s-B)].
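The last row is really the convolution theorem in disguise: the convolution of eAt and eBt should be [1/(A-B)]{eAt-eBt}, since the transforms multiply. A quick numeric sketch (A=1, B=-2, t=0.7 are sample values of my own choosing):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

A, B, t = 1.0, -2.0, 0.7
# convolution of e^(At) and e^(Bt), straight from the definition
conv = simpson(lambda tau: math.exp(A * tau) * math.exp(B * (t - tau)), 0.0, t)
# the closed form read off from the table
closed = (math.exp(A * t) - math.exp(B * t)) / (A - B)
assert abs(conv - closed) < 1e-10
```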

Theoretical consequences

What is the convolution of e3t and sin(5t)?
I'll try the same idea. The Laplace transform of e3t is 1/(s-3) and the Laplace transform of sin(5t) is 5/(s2+52). Therefore the Laplace transform of the convolution is the product of these Laplace transforms:
5/[(s-3)(s2+25)]. We can "recognize" this as the Laplace transform of something if we use partial fractions:

```       5          A     Bs+C
-------------- = --- + -------
[(s-3)(s2+25)]   s-3    s2+25```
Now combine the fractions, and let's look at the top of the result:
5=A(s2+25)+(Bs+C)(s-3).
Set s=3, and get 5=A(9+25)=34A, so A=5/34.
Since there are no s2 terms on the left-hand side, A+B=0 and B=-5/34. Now let's get C:
We could look at the s coefficients. 0=-3B+C, so C=3B=-15/34.

Therefore we will find the convolution of e3t and sin(5t) by discovering the inverse Laplace transform of (5/34)/(s-3)-(5/34)s/(s2+25)-(15/34)/(s2+25).
I can do this by looking things up in my handy table of Laplace transforms. (If I use the table enough, then I won't need it!) So the answer is
(5/34)e3t-(5/34)cos(5t)-(3/34)sin(5t). (The 15 becomes 3 because the Laplace transform of sin(5t) already has a 5 "on top").
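The partial-fractions answer can be checked directly against the defining convolution integral. A sketch (the evaluation point t=1.2 is my choice):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

t = 1.2
# convolution of e^(3t) and sin(5t), straight from the definition
conv = simpson(lambda tau: math.exp(3 * tau) * math.sin(5 * (t - tau)), 0.0, t)
# the answer found via partial fractions
closed = ((5 / 34) * math.exp(3 * t) - (5 / 34) * math.cos(5 * t)
          - (3 / 34) * math.sin(5 * t))
assert abs(conv - closed) < 1e-8
```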

We should do this:

• Step 1
If F(s) is the Laplace transform of f(t), then we have F(s)=(1/s)+(2/s2)-(8/3)(6/s4)F(s) because a convolution turns into a product under Laplace transform.

• Step 2
So F(s)+(16/s4)F(s)=(1/s)+(2/s2) and [(s4+16)F(s)]/s4=(1/s)+(2/s2) so that F(s)=[s3+2s2]/[s4+16].

• Step 3
We need the inverse Laplace transform of [s3+2s2]/[s4+16]

Back to the problem. We need to find the inverse Laplace transform of [s3+2s2]/[s4+16]. The poor old prof pooped out (p.o.p.p.o.) here. Well, I can, darn it, factor s4+16. First, I solve s4=-16. This means s2=+/-4i, and then another square root gives me s=sqrt(2)(+/-1+/-i). What the heck is going on? Well, if I want to do it only with real numbers, I notice that s4+16 is never 0 (it is always at least 16 for real s's), so it has no real roots.
I will guess (don't do this yourself!) that s4+16=(s2+As+4)(s2+Bs+4). The right-hand side multiplies out to be s4+16+other terms. The other terms are
s terms with coefficients 4(A+B)
s2 terms with coefficients AB+8
s3 terms with coefficients A+B.
So choose A=-B, then AB+8 will be 0 exactly when A=2sqrt(2) and B=-2sqrt(2). The irreducible factors are
s2+2sqrt(2)s+4 and s2-2sqrt(2)s+4.
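The guessed factorization is easy to verify by multiplying the two quadratics back together; a sketch, with coefficients listed from the constant term up:

```python
import math

# multiply (s^2 + A s + 4)(s^2 + B s + 4) with A = 2*sqrt(2), B = -2*sqrt(2)
A = 2 * math.sqrt(2)
B = -A
p = [4, A, 1]   # coefficients of s^2 + A s + 4, constant term first
q = [4, B, 1]
prod = [0.0] * (len(p) + len(q) - 1)
for i, pi in enumerate(p):
    for j, qj in enumerate(q):
        prod[i + j] += pi * qj

expected = [16, 0, 0, 0, 1]   # s^4 + 16
assert all(abs(c - e) < 1e-12 for c, e in zip(prod, expected))
```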

YUCK!!!
I do prepare, and I had looked at the answer in the back of the book. The answer there is: (3/8)e2t+(1/8)e-2t+(1/2)cos(2t)+(1/4)sin(2t). What the heck! Maple asserts that the inverse Laplace transform of [s3+2s2]/[s4+16] is

```> invlaplace((s^3+2*s^2)/(s^4+16),s,t);
(1/2)*2^(1/2)*sinh(2^(1/2)*t)*cos(2^(1/2)*t)
  + (1/2)*cosh(2^(1/2)*t)*(2^(1/2)*sin(2^(1/2)*t) + 2*cos(2^(1/2)*t))```
This is consistent with the factorization I got above for s4+16. Huh. But it is not the same as the answer in the back of the book. In fact, the answer in the back of the book is the answer if the minus sign in the problem is changed to a plus sign.

Restated book problem
f(t)=1+t+(8/3)0t(t-tau)3f(tau) dtau.
Now we need the inverse Laplace transform of [s3+2s2]/[s4-16]. I can factor s4-16 and I hope you can also. First, it splits up as (s2-4)(s2+4), and then further as (s-2)(s+2)(s2+4). So we need A, B, C, and D so that

```s3+2s2     A     B     Cs+D
------ = --- + --- + ------
s4-16     s-2   s+2   s2+4
```
Now push the fractions together on the right, and look at the tops of both sides. The equation resulting is
s3+2s2=A(s+2)(s2+4)+B(s-2)(s2+4)+(Cs+D)(s-2)(s+2)
This is not pleasant, but at least I can solve it "by hand". I hope you can see that the first term will give (using inverse Laplace) e2t, the second term will give e-2t, and the third term will give stuff involving sin(2t) and cos(2t). The real reason I am not continuing is that the answer in the back of the book is still incorrect. For example, s=2 gives me 16=A(32) so A=1/2. And the book's answer implies that A=3/8.
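The remaining coefficients can be pinned down with a few lines: cover-up at the real roots gives A and B, and two more sample values of s determine C and D. A sketch (the sample points s=0, 1, 3 are my own choice, not from the text):

```python
# partial fractions for (s^3 + 2 s^2)/(s^4 - 16):
# cover-up at the real roots s = 2 and s = -2
A = (2**3 + 2 * 2**2) / ((2 + 2) * (2**2 + 4))            # 16/32 = 1/2
B = ((-2)**3 + 2 * (-2)**2) / ((-2 - 2) * ((-2)**2 + 4))  # 0/(-32) = 0
# matching at s = 0:  0 = 8A - 8B - 4D
D = 2 * A - 2 * B
# matching at s = 1:  3 = 15A - 5B - 3C - 3D
C = (15 * A - 5 * B - 3 * D - 3) / 3

# verify the identity at a point that is not a root
s = 3.0
lhs = (s**3 + 2 * s**2) / (s**4 - 16)
rhs = A / (s - 2) + B / (s + 2) + (C * s + D) / (s**2 + 4)
assert abs(lhs - rhs) < 1e-12
assert abs(A - 0.5) < 1e-12
```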

If any student sees any errors I made, please tell me. Here is a link to a solution of problem #38 in section 4.4 which I did a year ago. At least I think I did it correctly! It is similar to the problem I attempted here!

HOMEWORK
Please read ahead in the book. The next section (4.5) has the most mind-blowing part of Laplace transforms.

### Thursday, September 15

 Confession
Writing the Laplace transform table
I wrote what we knew about Laplace transforms so far, and remarked that although there were (literally!) books of Laplace transforms, we'd have only a few more entries.

We talked a bit more about how the Heaviside function could be used to write piecewise-defined functions. In particular, I mentioned that Maple has the name Heaviside reserved for this function.
I know little about Matlab but Mr. O'Sullivan wrote me the following message:

```Interestingly enough, the Heaviside function is only recently (version 7)
supported by default in MATLAB. But, it works as expected. Looks like they
chose to just leave U(0) undefined, though.

>> x=-5:5

x =

-5    -4    -3    -2    -1     0     1     2     3     4     5

>> heaviside(x)

ans =

0     0     0     0     0   NaN     1     1     1     1     1

Trying the function in MATLAB <= 7 or Octave (an open source MATLAB
"clone") results in brokenness.
```

Since this is a 640 course, I should give some justification of this result. So I wrote the following:
The Laplace transform of U(t-a)f(t-a) is (by definition) 0infinitye-stU(t-a)f(t-a) dt. But U(t-a) is 0 for t<a and is 1 for t>a. So the integral doesn't need to start until a, and U(t-a) can be replaced by 1 in the part where t>a. That is, 0infinityU(t-a)BLAH=0aU(t-a)BLAH+ainfinityU(t-a)BLAH=ainfinityBLAH.
So we have ainfinitye-stf(t-a) dt. We can change variables in this integral. If w=t-a (remember, a is a constant) then dw=dt and t=w+a so that:
t=at=infinitye-stf(t-a) dt=w=0w=infinitye-s(w+a)f(w) dw=w=0w=infinitye-swe-saf(w) dw=e-saw=0w=infinitye-swf(w) dw. This is exactly e-asF(s) because w is the variable of integration -- it doesn't matter outside of the integral sign.
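The conclusion, L{U(t-a)f(t-a)}=e-asF(s), is easy to test numerically. A sketch with f(t)=sin(t), so F(s)=1/(s2+1); the sample values s=1.5, a=2 and the truncation at t=25 are my choices:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s, a = 1.5, 2.0
# left side: transform of U(t-a) f(t-a) with f(t) = sin(t); the integrand
# vanishes before t = a, and the tail past t = 25 is negligible for s = 1.5
lhs = simpson(lambda t: math.exp(-s * t) * math.sin(t - a), a, 25.0, n=8000)
# right side: e^(-as) F(s) with F(s) = 1/(s^2 + 1)
rhs = math.exp(-a * s) / (s**2 + 1)
assert abs(lhs - rhs) < 1e-8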

What about the inverse Laplace transform of, say, e-3s/s7? This should come from a translated version of Constant·t6. The e-3s part leads to translation by 3, and the constant must be adjusted, since 1/s7 is the Laplace transform of t6/6!. So I think this should be (1/6!)(t-3)6U(t-3).

QotD
What is the Laplace transform of the function which is -(t-1)(t-3) for t between 1 and 3, and 0 otherwise?

Since F(s)=L(f)(s)=0infinitye-stf(t) dt we could try to differentiate F(s). Thus (d/ds)F(s)=(d/ds)0infinitye-stf(t) dt.

Now push (!) the derivative inside the integral sign:
(d/ds)F(s)=0infinity(d/ds)e-stf(t) dt. and (d/ds)e-stf(t)=-te-stf(t) since d/ds only notices appearances of s.

(d/ds)F(s)=0infinity-te-stf(t) dt=-0infinitye-stt·f(t) dt which is minus the Laplace transform of t·f(t).
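A quick numerical check of -dF/ds=L(t·f(t)): take f(t)=e2t, so F(s)=1/(s-2) and both sides should equal 1/(s-2)2, which is 1/9 at s=5. The sample point and step size are my choices.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s, h = 5.0, 1e-5
F = lambda u: 1 / (u - 2)                       # transform of e^(2t)
minus_dF = -(F(s + h) - F(s - h)) / (2 * h)     # central difference
# transform of t*e^(2t) at s = 5, by quadrature (tail past t = 20 negligible)
lap_tf = simpson(lambda t: t * math.exp((2 - s) * t), 0.0, 20.0, n=8000)

assert abs(minus_dF - 1 / 9) < 1e-8
assert abs(lap_tf - 1 / 9) < 1e-8
```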

I remarked that the interchange of derivative and integral is not always valid, and manipulations of this sort probably helped Heaviside get into trouble with the academic establishment. Look only briefly at the following example which verifies that the interchange is not always true.

Here is an intricate example copied from Counterexamples in Analysis by Gelbaum and Olmstead, showing that "differentiation under the integral" is not always valid. Knowing the details of this example is definitely not part of Math 421, certainly, but I include it here so that near maniacs can verify there may be some problems with interchanging differentiation and integration.
The example uses the function f(x,y)=(x^3/y^2)exp(-x^2/y) if y is not 0, and 0 if y=0.

What the heck does this function look like? For fixed x it is continuous in y. For fixed y, it is continuous in x. But it is not continuous in x and y jointly. Look at the curve y=x^2. So f(t,t^2)=(1/t)e-1, and that is certainly not continuous at t=0. Slices with y=constant reveal a bump which gets higher and moves closer to the origin as y-->0+.

The integral with respect to y can be done easily (the function is x3e-x2 with various constants thrown in!). So y=0y=1f(x,y) dy=xe-x2, and the derivative of this integral with respect to x is e-x2(1-2x2). The value of this at x=0 is 1.

Now the partial derivative of f with respect to x is (for y not equal to 0) e-x2/y( 3x2/y2-2x4/y3 ). The integral of this mess dy from y=0 to y=1 is (for x not equal to 0) e-x2(1-2x2).

For x=0, the partial derivative is 0 (look at the formula!), and the integral is 0.

Notice now that the integral of the partial derivative with respect to x at x=0 is 0, but the derivative of the integral of the function with respect to y at x=0 is 1. These values are not the same!

Background
The letters t and tau are used everywhere. It is important to keep them straight. One useful way of doing this is to remember that the sum of the arguments of f and g is t, one has a minus sign, and both have tau's. Google gives me about 12,400 references for "convolution chemical engineering" and 16,000 for "convolution mechanical engineering".

What's the convolution of t2 and t3? This is 0ttau2(t-tau)3dtau. After a huge amount of debate, we decided (please see here or here if this is unfamiliar to you) that (t-tau)3= 1t3-3t2tau1+3t1tau2-1tau3 and therefore
0ttau2(t-tau)3dtau= 0ttau2[1t3-3t2tau1+3t1tau2-1tau3]dtau= 0ttau2t3-3t2tau3+3t1tau4-tau5dtau= (1/3)tau3t3-(3/4)t2tau4+(3/5)t1tau5-(1/6)tau6]tau=0tau=t= (1/3)t6-(3/4)t6+(3/5)t6-(1/6)t6.
Wow, what a computation. Of course, the result can be simplified (who could possibly care?) to get (1/60)t6.
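The simplified result is easy to spot-check numerically against the defining integral; a sketch at the sample point t=2 (my choice):

```python
def simpson(f, a, b, n=2000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

t = 2.0
# convolution of t^2 and t^3, straight from the definition
conv = simpson(lambda tau: tau**2 * (t - tau)**3, 0.0, t)
assert abs(conv - t**6 / 60) < 1e-9
```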

Mr. O'Sullivan suggested the following web page: http://mathworld.wolfram.com/Convolution.html. It has a discussion of convolution and an animation picture of several convolutions.

### Monday, September 12

 Confession
The table of Laplace transforms

A textbook problem
I daringly attempted problem #36 in section 4.2. 36 is a large number and even, so the problem must be difficult and there is no answer in the back of the book!

Section 4.2, Problem #36 Solve the Initial Value Problem:
(ODE) y´´-4y´=6e3t-3e-t (IC's) y(0)=1, y´(0)=-1

As I remarked, this is a very simple ODE: it is second order, linear, constant coefficient, inhomogeneous. What methods would I expect/hope that students could use on this problem?

Method #1 First solve the "associated homogeneous equation", y´´-4y´=0. Plug in ert, get the characteristic equation, etc. Attempt to get a particular solution using a variety of methods (undetermined coefficients or throwing dice or ...).

Method #2 Let y1=y and y2=y´. Build a column vector
Y=( y1 y2)t. Note: the silly superscript t means transpose (rows go to columns), and I am using it here (didn't use it in class yet!) because it is harder to type a column vector in html than a row vector. Sigh. Now if we define a 2-by-2 matrix, A, to be this:
(0 1)
(0 4)
then I hope that the matrix differential equation Y´=AY+S (S=other stuff I don't want to bother with now) is the same as the original problem. One can then apply methods of linear algebra (we'll do some of this later in the course) to solve the matrix DE which in turn will lead to a solution of the original equation.

Method #3 Use numerical techniques to approximate a solution. For many applications, this is just as good. For some applications such as those requiring long-term asymptotics as a function of some symbolic initial conditions, for example, numerical methods aren't too useful. Errors in ODE "stepping" methods (the most elementary of which is Euler's Method, and one frequently used method is RKF4, a Runge-Kutta method) tend to accumulate.

Method #4 Use the Laplace transform, which I'll try in a second. However, there is a natural question: why learn another method? Well, for this initial value problem almost every method "works". But each method has its own flavor (?) and gives a different perspective on the ODE. None of these methods will apply perfectly or easily to every problem. Having a variety of methods will allow you to analyze solutions of ODE's better.
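Method #3 can be made concrete for this very IVP. A minimal Euler's-method sketch (the step size and comparison point are my choices), using the first-order system of Method #2 and checked against the closed-form solution found below via the Laplace transform:

```python
import math

# Euler's method for y'' - 4y' = 6e^(3t) - 3e^(-t), y(0)=1, y'(0)=-1,
# written as the system y1' = y2, y2' = 4*y2 + 6e^(3t) - 3e^(-t)
def euler(h, t_end):
    y1, y2, t = 1.0, -1.0, 0.0
    for _ in range(round(t_end / h)):
        dy1 = y2
        dy2 = 4 * y2 + 6 * math.exp(3 * t) - 3 * math.exp(-t)
        y1 += h * dy1
        y2 += h * dy2
        t += h
    return y1

# closed form found below via the Laplace transform
exact = lambda t: ((11 / 10) * math.exp(4 * t) + 5 / 2
                   - 2 * math.exp(3 * t) - (3 / 5) * math.exp(-t))

approx = euler(1e-5, 1.0)
assert abs(approx - exact(1.0)) / abs(exact(1.0)) < 0.05
```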

This is proved by repeated integration by parts. I verified the n=1 case last time. This result really shows why the Laplace transform is very well adapted to initial value problems (it was invented for them, darn it!). Most of the time in this course we will use this result with n=1 or n=2 but there are certainly lots of cases in real life with other n's. ME alert n=4: vibrating beam. Please see example #9 on p.211 of the text.

When n=2, the result states that the Laplace transform of y´´ is s2Y-sy(0)-y´(0). Here we know that y(0)=1 and y´(0)=-1 so that the Laplace transform of y´´ is s2Y-s+1. When n=1, the result states that the Laplace transform of y´ is sY-y(0). Again using y(0)=1, we get sY-1 as the Laplace transform of this y´. Therefore (sigh) the Laplace transform of the left-hand side of the original equation (y´´-4y´, including the stated initial conditions) is s2Y-s+1-4(sY-1)=(s2-4s)Y-s+1+4.

So the Laplace transform of
y´´-4y´=6e3t-3e-t, y(0)=1, y´(0)=-1
is
s2Y-s+1-4(sY-1)=[6/(s-3)]+[-3/(s+1)].
(The ODE and the initial conditions are all together in that one equation!)

Now what? Now we "solve" for Y, the unknown Laplace transform. And we attempt to locate pieces of Y on the Laplace transform side of the table which was on the board. Let's do this.

s2Y-s+1-4(sY-1)=[6/(s-3)]+[-3/(s+1)] becomes
(s2-4s)Y-s+5=[6/(s-3)]+[-3/(s+1)] which gives us
Y=[s/(s2-4s)]-[5/(s2-4s)] +[6/{(s2-4s)(s-3)}]+[-3/{(s2-4s)(s+1)}].

In order to find y(t), we should get the inverse Laplace transform of Y(s). Using linearity, we "only" need to get the inverse Laplace transforms of each of the pieces. I think I only analyzed one of the pieces before I gave up: [6/{(s2-4s)(s-3)}].

So we use partial fractions. The bottom of this fraction is actually (s-4)s(s-3), so the partial fractions cell in my brain springs into action: we should be able to write the fraction as the sum of
[A/(s-4)]+[B/(s)]+[C/(s-3)]=[{A(s)(s-3)+B(s-4)(s-3)+C(s-4)(s)}/{(s2-4s)(s-3)}].
If we combine the fractions, we know that 6 is supposed to be equal to A(s)(s-3)+B(s-4)(s-3)+C(s-4)(s). The conclusion is easy:
s=0 gives 6=12B so B=1/2.
s=4 gives 6=4A so A=3/2.
s=3 gives 6=-3C so C=-2.
Therefore we need to find the inverse Laplace transform of [{3/2}/(s-4)]+[{1/2}/(s)]+[-2/(s-3)] which is {3/2}e4t+{1/2}e0t+{-2}e3t.

I just got Maple 10 at home. This is the same version of Maple which is in the Rutgers computer labs. Here's a little bit of the dialogue. # on a line indicates a comment and Maple ignores it.

```> with(inttrans);  # loads various useful diff'l equations transforms

[addtable, fourier, fouriercos, fouriersin, hankel, hilbert, invfourier, invhilbert, invlaplace,invmellin, laplace, mellin, savetable]

> invlaplace(6/((s^2-4*s)*(s-3)),s,t);
3/2 exp(4 t) - 2 exp(3 t) + 1/2
```
So I guess we were correct.

Then I gave up, at least computing by hand. But:

```
> invlaplace(s/(s^2-4*s)-5/(s^2-4*s)+6/((s^2-4*s)*(s-3))-3/((s^2-4*s)*(s+1)),s,t);
11/10 exp(4 t) + 5/2 - 2 exp(3 t) - 3/5 exp(-t)
```
I bet that (11/10)e4t+5/2-2e3t-(3/5)e-t is a likely answer. For your amusement, I remark that I had to try three times before I typed in the correct Maple command.
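Rather than trusting Maple, the candidate answer can be checked by plugging it (and its derivatives, computed by hand below) back into the ODE and the initial conditions:

```python
import math

# candidate: y(t) = (11/10)e^(4t) + 5/2 - 2e^(3t) - (3/5)e^(-t)
y   = lambda t: (11/10)*math.exp(4*t) + 5/2 - 2*math.exp(3*t) - (3/5)*math.exp(-t)
yp  = lambda t: (44/10)*math.exp(4*t) - 6*math.exp(3*t) + (3/5)*math.exp(-t)
ypp = lambda t: (176/10)*math.exp(4*t) - 18*math.exp(3*t) - (3/5)*math.exp(-t)

# initial conditions y(0) = 1, y'(0) = -1
assert abs(y(0) - 1) < 1e-12
assert abs(yp(0) + 1) < 1e-12

# the ODE y'' - 4y' = 6e^(3t) - 3e^(-t) at a few sample points
for t in (0.0, 0.5, 1.0):
    lhs = ypp(t) - 4 * yp(t)
    rhs = 6 * math.exp(3 * t) - 3 * math.exp(-t)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```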

I have experience with several different versions of Maple, and they don't always give the same version of the answer. For example, some versions package a linear combination of e3t and e-3t as an equivalent linear combination of sinh(3t) and cosh(3t). This can be annoying. I've even found a version of Maple (not the current one!) which insisted on using i sinh(i t) instead of sin(t). So using symbolic manipulation software can have its problems as well as benefits!

As I remarked, polynomials have exponential growth. Maybe that is "clear" for t2, but not for, say, t216,000. In fact, I bet that for t large enough positive, |t216,000| is less than even .003e.000007t. We discussed this, and decided that this could be verified with L'Hopital's rule. Look at the quotient
t216,000/( .003e.000007t)
as t-->infinity. Both the top and the bottom go to infinity. Use L'H lots of times. If you use it 216,000 times, the result is
(Some huge constant)/[(Another stupid constant)e.000007t]
because the polynomial's powers go down to 0, and the bottom is still an exponential with positive exponent, and just "spits out" positive multiplicative constants upon differentiation. But the limit now, after 216,000 uses of L'H is 0, because the fraction is really just
CONSTANT/(exponential growth)
Therefore for large enough t, this is less than, say, 1.
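Since t216,000 overflows any floating-point type, the comparison is best done with logarithms. A sketch that brackets the crossover where .003e.000007t overtakes t216,000 (the bracketing interval is my choice):

```python
import math

# compare logs: g(t) = 216000*ln(t) - (ln(0.003) + 0.000007*t);
# g > 0 means the polynomial is still ahead, g < 0 means the
# exponential has taken over
g = lambda t: 216000 * math.log(t) - (math.log(0.003) + 0.000007 * t)

lo, hi = 1e6, 1e13
assert g(lo) > 0      # polynomial still ahead here
assert g(hi) < 0      # exponential has won by here

# bisect to locate the crossover
for _ in range(100):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
assert g(hi) <= 0 <= g(lo)
```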

Even something like t216,000e30003t has exponential growth. It is bounded eventually by some multiple of e30004t or even just e(30003.001)t.

Here is a candidate suggested by a student (who?) for a function which does not have exponential growth:
tt=eln(t)·t.
Indeed, I agree: ln(t) can't be bounded by ANY POSITIVE CONSTANT, so there's no B that will work.

What we're investigating here in Math 421 is the basic, classical Laplace transform. It has been fiddled with in many ways. So actually there are ways that functions with faster growth can be used. And there are ways that, say, difference equations (a discrete analog of differential equations) can be solved with similar tools. But let's learn the classical case well enough so that you will be able to use and understand the variant methods.

Inverse Laplace transforms theory interlude #2
Well, here on the board (I said, motioning to the side blackboard) is a table of Laplace transforms. It turns out that it is also, more or less, sort of, almost, also a table of inverse Laplace transforms. Here, in a wonderfully logical 640-subject course, what do the words "more or less, sort of, almost ..." mean?

Well, I considered the following example. Certainly our table tells us that the Laplace transform of t2 is 2/s3. Let's call t2 by the name, h(t), and let's further define the function g(t) by the stipulations that g(t) is -1 when t=3 and g(t) is 40 when t=6 and otherwise g(t)=t2. I know the darn picture is not drawn to scale, but the poetic truth is there. So, anyway, certainly h and g are distinct functions. But they have the same Laplace transform. Look, the formula for the Laplace transform of y(t) is 0infinitye-sty(t) dt. And the integral doesn't even notice if you change the values of the integrand (the function that's being integrated) at a couple of points. (Look at the darn definition of the integral, if you don't believe it, and take a Riemann sum with very, very narrow width around the jump places.) So there can be two functions whose Laplace transforms are exactly the same. Well, but for all practical purposes if you are in a physical situation, you won't notice that change at a point. And it turns out, more or less, that this is the only kind of problem you'll face.

Lerch's Theorem (I'm sorry but I like the name.)
If two functions have the same Laplace transform, then they differ only on a collection of values that the integral doesn't notice.

Yeah, this can be more precise, but that's all I want to say. But, you may ask, what if we are supposed to find an inverse Laplace transform? For example, the inverse Laplace transform of 2/s3? Well, yeah again, you do have to choose between h(t) and g(t). What everyone chooses is a function that is as continuous as possible. So if the left and right-hand limits agree at a point, that's the value the inverse Laplace transform should have at the point. And the algorithms in symbolic algebra packages are designed to make such a choice. And also, there is an inverse Laplace transform with an integral formula, called Bromwich's integral, which "automatically" gives you the most continuous (there's a phrase!) inverse Laplace transform. All these things can be implemented symbolically and numerically.

You can find lots of references to this stuff on the web.

Our major present goal is to expand our table of Laplace transforms (and, therefore, inverse Laplace transforms). The two shifting or translation theorems are very important in accomplishing this.

This is correct because the Laplace transform of eatf(t) is 0infinitye-steatf(t) dt which is the same as
0infinitye-st+atf(t) dt which is the same as 0infinitye-(s-a)tf(t) dt and that is exactly the Laplace transform of f(t) evaluated at s-a, or F(s-a).

The Heaviside function
I defined the Heaviside or unit step function, called U(t) in your text (actually with a calligraphic U).
Oliver Heaviside was a brilliant English engineer whose life was, overall, rather sad. U is the function which is 0 for t<0 and is 1 for t>=0. There's a jump of 1 at 0, and otherwise the function's graph consists of two half lines.
 I think I graphed a few examples, like U(t-3) (the jump is moved to 3) and U(t)+4U(t-3)-2U(t-6) (the graph "starts" at 0 with a jump up to 1, jumps up 4 (to 5) at t=3, and then down 2 (to 3) at t=6; aside from the jumps, the graph is pieced together from horizontal line segments).
Using U
U(t) allows us to write formulas for other piecewise defined functions.

Suppose y(t) is the function which is 0 for t<0, is t2 for t in the interval [0,1], is 1 in between 1 and 3, is (t-4)2 in the interval [3,4], and is 0 for "later" t. I hope I've drawn a picture of y(t) to the left.
What's a formula for y(t)? I sort of work from left to right in t. First there's nothing. Then we want to "turn on" t2 at t=0, so we need t2U(t). Then we need to turn off t2 at t=1, and turn on a height of 1. Hey, change the formula to t2U(t)+(-t2+1)U(t-1). Now let's continue to the right, where we need to turn off the height of 1, and turn on the downward parabolic arc. This change is (-1+(t-4)2)U(t-3) and should be added to what we already had. Then, finally, at 4, turn off (t-4)2 with -(t-4)2U(t-4). So a formula for this function is t2U(t)+(-t2+1)U(t-1)+(-1+(t-4)2)U(t-3)-(t-4)2U(t-4)

Here is what Maple does. The first instruction defines the function, y(t). This uses the builtin function "Heaviside", what we call U(t). The second instruction plots it. And the plot is shown.

```
>y:=t->t^2*Heaviside(t)+(-t^2+1)*Heaviside(t-1)+(-1+(t-4)^2)*Heaviside(t-3) -(t-4)^2*Heaviside(t-4):

>plot(y(t), t=-1..6,thickness=3,scaling=constrained);

```
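The same construction works in any language once you hand-roll the step function. A Python sketch of U and y, spot-checked against the piecewise description (with U(0)=1, as in the text):

```python
def U(t):
    # Heaviside unit step, with U(0) = 1 as in the text
    return 1.0 if t >= 0 else 0.0

def y(t):
    return (t**2 * U(t) + (1 - t**2) * U(t - 1)
            + (-1 + (t - 4)**2) * U(t - 3) - (t - 4)**2 * U(t - 4))

# spot-check each piece of the piecewise description
assert y(-1.0) == 0.0               # nothing yet
assert y(0.5) == 0.25               # t^2 on [0, 1]
assert y(2.0) == 1.0                # constant 1 on [1, 3]
assert abs(y(3.5) - 0.25) < 1e-12   # (t-4)^2 on [3, 4]
assert y(5.0) == 0.0                # turned off after 4
```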
Now we are lost ...
This is all your fault. We are about 40% of a lecture behind. I think I will begin on Thursday by remarking that you can easily catch up, so I will begin where I should begin. Humph.

Return of the entrance exam
I returned the Entrance Exam. I view this exam as a very useful diagnostic, which has shown some correlation with final course grades in Math 421. For me, it is useful because the exam shines a light on two aspects of students. First, it clearly shows student knowledge of representative content: everything asked on the exam will be relevant to parts of the course, and, actually, everything asked on the exam is important in sections of the course. Second, giving an Entrance Exam like this to a group of somewhat sophisticated educational "consumers" reveals a bit about their psychology: are they willing to work, to review, to ask questions ... students who do these things are more likely to learn, and not just barely survive. As I mentioned in class, this is a fairly advanced undergraduate course, and the person most responsible for teaching you is ... well, you! I hope I will be helpful, but stuff will go by very rapidly. I strongly recommend working with one or more other students in the course on a regular basis, meeting weekly or more often for an hour or two. This is likely to keep you "on task", and working together you will increase the odds of success.

### Thursday, September 8

 Confession
My official office hours will be Busch period 4 on Monday and Thursday, immediately after class. Since few of you are new college students, you of course know that these times are first chosen to be convenient for me. But, please, I will likely be in my office many other times during the week. You can try to drop in on me. I may be busy, but there's a good chance we'll be able to talk. Further, I invite you to make a mutually agreeable appointment with me. Probably the best way to do this is by e-mail.

I hope you will look at this web page. Uhhh ... it is the page you are looking at now. If you aren't looking at it now, then maybe you could look at it some time. Or if ... oh well.

I began by reminding people of the definition of Laplace transform. The Laplace transform of a function y(t) is a function Y(s). Y(s) is 0infinitye-sty(t) dt. This is also called L(y(t)). Here L means what is usually written by a calligraphic capital L. It is standard notation for the Laplace transform.

0infinitye-steatdt=0infinitye(-s+a)tdt=(Fundamental Theorem of Calculus, but be careful about what variable you are "integrating"!)=(1/[-s+a])e(-s+a)t]0infinity. Now what happens when "t=infinity"? This really means t-->infinity. So for large enough s (s bigger than a, for example) the exponential has t multiplied by some constant which is negative. Then the exponential is decaying, so as t-->infinity, the exponential goes to 0 (more later). When t=0, since we're looking at the bottom of the ], we have -(1/[-s+a])e0 and e0=1, so that the answer is -(1/[-s+a]), or 1/(s-a), as Theorem 4.1 states.
In many of these computations, there will be plenty of minus signs and much opportunity for error. I don't know how to avoid all errors.

Last time I reminded people that cos(theta) is (1/2)(eitheta+e-itheta). Therefore the Laplace transform of cos(kt) (so kt will be theta) is (using linearity of Laplace transform!) is 1/2 multiplied by the sum of the Laplace transforms of eikt and e-ikt. The Laplace transform of eikt is 1/[s-ki] (I'm using the result we just derived, with a changed to ki). The Laplace transform of e-ikt is 1/[s+ki] (I'm using the result we just derived, now with a changed to -ki). Therefore the Laplace transform of cos(kt) is (1/2)({1/[s-ki]}+{1/[s+ki]}). Some algebra which we did changes this to (1/2)({[s+ki]+[s-ki]}/{[s-ki][s+ki]}) and this is (1/2)([2s]/{s2+k2}) which is s/(s2+k2), the formula I wanted to check.

If you are confused by this, PLEASE try to compute the Laplace transform of sin(kt) using the same method. PLEASE!!!
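If you'd rather check the target, k/(s2+k2), before deriving it, a quadrature sketch works (the sample values s=2, k=3 and the truncation at t=15 are my choices):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s, k = 2.0, 3.0
# Laplace transform of sin(kt) at s, from the definition
# (the tail past t = 15 is negligible for s = 2)
numeric = simpson(lambda t: math.exp(-s * t) * math.sin(k * t), 0.0, 15.0, n=8000)
assert abs(numeric - k / (s**2 + k**2)) < 1e-8
```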

So Y(s)=01e-sty(t) dt. Students immediately observed that we don't "need" the integral where part of the integrand is 0, so we should compute 01e-stt dt. Hey: this computation demands integration by parts.
Strategic note You will do more computations using integration by parts in this course than ever before during a similar period of your life. (And maybe [likely!] during any similar period for the remainder of your life!)
For integration by parts, I generally write all the details. I have made too many mistakes and I've gotten tired of fouling up. I usually write:

```01e-stt dt=
u dv = uv]   -  v du

u=   } { du=
dv=   } {  v=```
Then I try to make a good choice for u and see what happens. Your experience must guide you, but generally you want to "exchange" the u dv integral for a "better" v du integral.

Here we can use u=t so dv=e-st dt. Then du=dt and v=(-1/s)e-st. The integration by parts becomes
01e-stt dt= -(t/s)e-st]01-01(-1/s)e-st dt
Now the "boundary term" (as the uv term is sometimes called) is -(t/s)e-st]01 which contributes -e-s/s to our result.
-01(-1/s)e-st dt can be integrated directly and we get -{-{-1/s2}}e-st]t=0t=1 and this is -e-s/s2 (from t=1) -{-1/s2} (from t=0).
The total result is -e-s/s-e-s/s2+{1/s2} and, in one fraction, this is
(-se-s-e-s+1)/s2
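With all those minus signs, a numerical check against the defining integral is reassuring. A sketch at the sample point s=2 (my choice):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

# the transform computed by integration by parts
Y = lambda s: (-s * math.exp(-s) - math.exp(-s) + 1) / s**2

# compare with the defining integral over the triangle's support [0, 1]
s = 2.0
numeric = simpson(lambda t: math.exp(-s * t) * t, 0.0, 1.0)
assert abs(numeric - Y(2.0)) < 1e-10
```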

One comment to make is that we have replaced the rough function by a function given by complicated combinations of nice smooth functions. Still, it isn't clear that this is a "win" unless further good things occur. They will.

Now what about s-->0+? O.k., the exponential e-st still decreases, but it decreases more slowly. (Uhhhh ... these heuristics can be made precise -- did you think this was a math course?) So for more and more t, e-st gets closer and closer to 1 as s-->0+. When we multiply this by y(t), the result for lots of t will be close to y(t), and the total integral will look more and more like the net area (area over the t-axis counted as positive, under the t-axis as negative) of y(t). So I think that as s-->0+, Y(s)-->the net area under all of the y(t) function.

Well let's check both of these asymptotic statements with our little triangle function. So Y(s)=(-se-s-e-s+1)/s2. Certainly as s-->infinity, the e-s terms decay, and the s2 in the bottom takes care of the 1. So Y(s)-->0 as s-->infinity.
What about as s-->0+? Well, heck, you folks are engineering students. I would first try "plugging in" s=0. The result is 0/0. I know what to try next: l'Hopital. So we d/ds the top and bottom of Y(s) separately. The result is (if I do it correctly!) (-e-s+se-s+e-s)/(2s). Now plug in again. Hey, the result is ... uhhhh ... 0/0. You could use l'H again, but in fact, if you clean up the fraction algebraically you'll see that the result is e-s/2, and as s-->0+, this certainly --> 1/2, the area of that little triangle region.

### Diary entry in progress! More to come.

two little triangles, change of variables

### Diary entry in progress! More to come.

infinitely many triangles (!)

Well, let's start with 0infinitye-stf´(t) dt, which is the Laplace transform of f´(t). Integration by parts can be used, with u=e-st and dv=f´(t) dt, so that du=-s e-stdt and v=f(t). Therefore
0infinitye-stf´(t) dt=e-stf(t)]0infinity-0infinity(-s e-st)f(t) dt.
There is all sorts of sneaky stuff going on here, and we should be very careful. Let's see. The boundary term is e-stf(t)]0infinity. Now s>0, so when t-->infinity, e-stf(t)-->0. Technically I am using the fact that f(t) doesn't grow too fast (in fact, eventually it is "killed" by exponentials decreasing fast enough, e-st for large s), but let's just try to get the mood of this method. When t=0, we get -f(0) since e0=1. What about the integral term? Notice that there are two minus signs. They cancel. What's left is s multiplied by the Laplace transform of f. So now we know:

L(f´)(s)=-f(0)+sL(f)(s).
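This formula is easy to test numerically. A sketch with f(t)=cos(t), so f´(t)=-sin(t) and L(f)(s)=s/(s2+1); the sample point s=2 and the truncation at t=15 are my choices:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s = 2.0
# left side: transform of f'(t) = -sin(t), from the definition
lap_fprime = simpson(lambda t: math.exp(-s * t) * (-math.sin(t)),
                     0.0, 15.0, n=8000)
# right side: -f(0) + s L(f)(s), with f(0) = 1 and L(cos)(s) = s/(s^2+1)
lap_f = s / (s**2 + 1)
assert abs(lap_fprime - (-1 + s * lap_f)) < 1e-8
```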

The Question of the Day (QotD)
Partial fractions (from the dawn of time: calc 2) states that [(3s)/{(s2+1)(s+2)}]=A/(s+2) +[(Bs+C)/(s2+1)]. Find A and B and C.
Solution Combine the fractions and just look at the tops. Then we get 3s=A(s2+1)+(Bs+C)(s+2), so if s=-2, then -6=5A and A=-6/5. The s2 coefficients in the equation must cancel (no s2's in 3s!) so B=6/5. Considering the constant term gives C=3/5.

Therefore Y(s)=[4/(s+2)]+A/(s+2) +[(Bs+C)/(s2+1)] with A and B and C as above. Look at the results of Theorem 4.2. I predict (linearity!) that y(t)=4e-2t-(6/5)e-2t+(6/5)cos(t)+(3/5)sin(t). Wow! You can check that this result does satisfy the given initial condition (not hard) and also satisfies the ODE (more difficult).
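As an extra check (my own, in sympy; not part of the lecture), the Laplace transform of the predicted y(t) should equal Y(s)=4/(s+2)+3s/((s2+1)(s+2)):

```python
from sympy import symbols, exp, sin, cos, Rational, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

# the predicted solution
y = 4*exp(-2*t) - Rational(6, 5)*exp(-2*t) + Rational(6, 5)*cos(t) + Rational(3, 5)*sin(t)

Y = laplace_transform(y, t, s, noconds=True)
target = 4/(s + 2) + 3*s/((s**2 + 1)*(s + 2))

print(simplify(Y - target))   # 0
```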

This is the whole idea!

### Thursday, September 2

 Confession

The standard clerical stuff was done. The information discussed, and much more, is available here and here.

I began by reviewing very briefly the idea of initial value problems (IVP's) for ordinary differential equations (ODE's). An ODE is an equation involving an unknown function of one variable and its derivatives. For example, y´=2t. A solution to such an equation is a function which when substituted into the equation makes it "identically" true. For example, in this simple equation y=t2+C (any constant C) solves the equation. An initial condition (IC) is a value, "t0", for which we require the function to have a certain value, "y0". For example, we could ask that the solution to this ODE satisfy the IC (3,17). Then we choose C so that 17=32+C is true: I guess C=8. The combination of ODE and IC is called an initial value problem. We expect, due to simple physical examples, that IVP's will have unique solutions. The independent variable in this course will frequently be called t for time, so the dependent variable y is a model of some process which depends on time.
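For instance, the toy IVP above can be handed to a computer algebra system (Maple in class; here a sympy sketch of mine) and it reproduces C=8:

```python
from sympy import symbols, Function, Eq, dsolve

t = symbols('t')
y = Function('y')

# y' = 2t with the IC y(3) = 17
sol = dsolve(Eq(y(t).diff(t), 2*t), y(t), ics={y(3): 17})
print(sol)   # y(t) = t**2 + 8
```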

Here is one version of the major theoretical result of the subject, the
Existence and Uniqueness Theorem for ODE's
Suppose f(t,y) is a differentiable function in both t and y for (t,y)'s in a region in R2, and that the point (t0,y0) is in the region. Then the ODE y´=f(t,y) has a unique solution going through the point (t0,y0).
Another way of writing the IC is y(t0)=y0.

This is a wonderful theorem, but its usefulness is definitely limited. You shouldn't read "into" it any more than is already there. So let me show this with some examples.

General disclaimer The examples discussed in class will mostly be very artificial, and chosen so that "hand" computation is practical.

Example 1 y´=e(t2) and y(0)=0. Then the solution is y(t)=∫0t e(w2)dw. It turns out (and it can be proved!) that this integral can't be simplified or written in terms of any of the standard functions of calculus, using algebraic means of combination, or even using function composition.

Generally, it is impossible to write solutions of a random ODE in terms of familiar functions. Students should know that this is true, even if we use Maple or Matlab or Mathematica or ... and it can even be very difficult or impractical to approximate solutions numerically. Sigh. The examples and methods that are shown here and in Math 244 really are a collection of tricks which work on many ODE's modeling physical situations. They are not guaranteed to work on all ODE's, or even all ODE's derived from fairly simple physical models. But the tricks are very useful in many examples. Now, back to work.

Example 2 The solutions to y´=ty2 are not difficult to get (the instructor of course made some errors). This is a separable ODE. I separated the variables, integrated, cleaned up some algebra, and the solutions seemed to be y=2/(C-t2). I verified this in the most direct way possible, by computing y´ and checking that the result was ty2.
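That direct verification can also be done symbolically (a sympy one-liner of mine; in class the check was by hand):

```python
from sympy import symbols, diff, simplify

t, C = symbols('t C')
y = 2/(C - t**2)

# y' - t*y^2 should be identically zero
print(simplify(diff(y, t) - t*y**2))   # 0
```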

Let's look at some specific solutions. The solution satisfying the IC (0,200) has C=2/200=.01, so y=2/(.01-t2). The natural domain of this function is (-1/10,1/10). As t-->1/10-, y-->infinity: the solution explodes. If the IC is (0,2·104) the solution is y=2/(.0001-t2) and it has domain (-1/100,1/100). There is one special case (you need to look critically at the separation of variables process to spot what goes wrong): if the initial condition is (0,0), the solution is y=0 for all t.
The picture to the right is supposed to give an impression of the geometry of the solution curves to this ODE. Notice, please, that the Existence and Uniqueness Theorem is totally correct, but really unhelpful. No inspection of f(t,y)=ty2 seems to show anything wrong (!!) with the function, but many of the solutions don't "live" very long, and they live for different amounts of time. The solutions blow up in the most immediate way (y-->infinity) at different values of t. This ODE is nonlinear. We will mostly concentrate on linear ODE's, where it turns out that such problems don't occur.

The most routine example is y´´+y=0, which is the ODE modeling an ideal spring using Hooke's law. We can still learn things from this example! This is a second order ODE (order refers to the highest number of derivatives needed to write the equation). Here the trick is to guess a solution: try y=ert. Then y´´+y=0 becomes magically ert(r2+1)=0. The exponential function is never 0. Therefore the guess solves the ODE exactly when r2+1=0 (I think this is called the characteristic equation). The roots of this characteristic equation are +/-i.

This is a linear ODE. The particular very very important qualitative consequence is that sums of solutions of the homogeneous equation are solutions, and so are constant multiples of solutions. Why is this true? Look:

• If y1 is a solution, then y1´´+y1=0.
If y2 is a solution, then y2´´+y2=0.
Add the equations to get y1´´+y1+y2´´+y2=0. Rearrange and recognize that this is the same as (y1+y2)´´+(y1+y2)=0, so y1+y2 is a solution.
• If y1 is a solution, then y1´´+y1=0.
Multiply the equation by a constant, C, to get C(y1´´+y1)=0. Again, rearrange and recognize that we have (Cy1)´´+(Cy1)=0, so Cy1 is a solution.
This is very pleasant. If we go back momentarily to y´=ty2, then look: even though 2/(.01-t2) and 2/(.0001-t2) are solutions, the sum, 2/(.01-t2)+2/(.0001-t2), is not a solution, and 2·[2/(.01-t2)] is not a solution. The linearity of the ODE is tremendously convenient. Linearity has an elaborate classical name, the principle of superposition. In mathspeak, the solutions of this homogeneous ODE form a vector space. I would like to describe a convenient basis of this vector space.
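Both halves of this contrast are easy to confirm symbolically (my sympy sketch, using the complex exponential solutions from the characteristic equation and the two explicit solutions y=2/(.01-t2) and y=2/(.0001-t2) of the nonlinear example):

```python
from sympy import symbols, exp, I, Rational, diff, simplify

t, C1, C2 = symbols('t C1 C2')

# linear equation y'' + y = 0: e^{it} and e^{-it} are solutions, and so is
# any constant-coefficient combination of them (superposition)
y = C1*exp(I*t) + C2*exp(-I*t)
print(simplify(diff(y, t, 2) + y))   # 0

# nonlinear equation y' = t*y^2: the sum of two solutions is NOT a solution
ya = 2/(Rational(1, 100) - t**2)
yb = 2/(Rational(1, 10000) - t**2)
print(simplify(diff(ya + yb, t) - t*(ya + yb)**2))   # not 0
```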

Let's use linearity. y´´+y=0 has solutions eit and e-it, so if C1 and C2 are any constants, then C1eit+C2e-it must be a solution. Let me search for a solution satisfying certain initial conditions. Since y´´+y=0 is a second order equation, the IC's will generally have two parameters. I want a solution satisfying y(0)=1 and y´(0)=0. I will call this an initial position solution, yP. What is it?

If yP(t)=C1eit+C2e-it then yP´(t)=iC1eit-iC2e-it. We can get yP(0)=1 by having C1+C2=1. We can get yP´(0)=0 by having iC1-iC2=0 which is the same as C1-C2=0. After some massive amount of thought, we managed to solve this system of linear equations: C1=1/2 and C2=1/2. Therefore yP(t)=[eit+e-it]/2.

Similarly, we can solve the initial value problem y´´+y=0 with y(0)=0 and y´(0)=1, which I'll call the initial velocity solution, yV(t). It turns out to be yV(t)=[eit-e-it]/(2i). You should check this!

Why would one want to know the yP and yV solutions? Well, they are really neat if you want to "solve" lots of initial value problems for y´´+y=0. The pattern of initial conditions (1 and 0, and 0 and 1) allows us to write a solution for the IVP y(0)=7 and y´(0)=-13. Here it is: 7yP(t)-13yV(t). This is so easy.
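The whole computation can be checked in sympy (my sketch; sympy writes i as I):

```python
from sympy import symbols, exp, I, diff, simplify

t = symbols('t')

yP = (exp(I*t) + exp(-I*t)) / 2        # initial position solution: y(0)=1, y'(0)=0
yV = (exp(I*t) - exp(-I*t)) / (2*I)    # initial velocity solution: y(0)=0, y'(0)=1

# claimed solution of y'' + y = 0 with y(0)=7, y'(0)=-13
y = 7*yP - 13*yV

print(simplify(diff(y, t, 2) + y))       # 0: it solves the ODE
print(simplify(y.subs(t, 0)))            # 7
print(simplify(diff(y, t).subs(t, 0)))   # -13
```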

Everything I've written is correct, but of course some important things have not been written! First, writing the solutions as complex exponentials conceals important features of the solutions. For example, since y´´+y=0 models simple harmonic motion, the solutions had better be bounded (they shouldn't grow to infinity in any fashion). I think springs don't do that. And maybe my formulation of yP and yV doesn't entirely display this. But we could compare power series or do other stuff and, in some fashion, remember Euler's formula and its consequences. So here read this, please:

Euler tells me ... (An axiom for Math 421)
eit=cos(t)+i sin(t) and cos(t)=[eit+e-it]/2 and sin(t)=[eit-e-it]/(2i)

If we now consider a "general" second order, linear, constant coefficient, homogeneous ODE (by the way, each phrase or word I've just written should make some sense to you and if any do not, you must review material from 244 -- see the syllabus for suggested reading in our text), Ay´´+By´+Cy=0 then I can tell you what to expect about the solutions. If B2-4AC<0, probably sines and cosines will appear. If B2-4AC>0, the solutions can be written with coshes and sinhes. There will also be some exponential factors (damping or otherwise). What happens if B2-4AC=0? It's a mystery, and you should figure it out.

Part I of the course: Escaping the wolves!
The snow was coming down thicker and the chill, at first merely uncomfortable, was becoming a serious problem. With nightfall, we could hear the howls of the wolves coming closer. It was time to try for the Duke's castle and safety! I grabbed the child, and jumped on my horse. I called to the others in the party, "Get on your brave steeds, and ride rapidly to the castle ..."

No, no, no!

### Part I of the course: the Laplace transform

Definition Suppose f(t) is defined for all non-negative t: [0,infinity). Then the Laplace transform F(s) of f(t) is ∫0infinity e-stf(t) dt.

This is a complicated definition.

1. As I remarked in class, the use of the variables t, f(t), s, and F(s) with Laplace transforms is traditional. Almost every text and reference I've seen uses them.
2. The integral is with respect to t, and t is the "dummy variable" in the integral. For example, consider the simpler but strange combination ∫13(x+y) dy. I can antidifferentiate this and get xy+(1/2)y2]13 which is 2x+4: quite straightforward, I think. But this is the same as ∫13(x+w) dw. The result will again be a function only of x. The y and then the w are dummy variables. Sometimes this can be a bit confusing, and we will need to pay some attention.
3. The integral defining the Laplace transform is an improper integral, and sometimes the behavior of these can be a bit unintuitive. For example, ∫1infinity(1/x) dx is infinite, but ∫1infinity(1/x2) dx is finite. I certainly can't tell this by just looking at a rough graph of the two functions. Officially, improper integrals are defined as limits. I'll do the first example using limits, quite carefully! But much of the time I won't be so careful, and we can sometimes get weird and even wrong results if we are too careless.
Why should we study Laplace transforms?
• The Laplace transform will turn out to be another "trick" which can be used to solve some ODE's. It is a trick that works really well sometimes, and even other times when it doesn't work terrifically, it gives a different way of trying to solve the ODE. I think it is better to know more tricks.
• The Laplace transform will allow us to solve IVP's for ODE's in one computation. We won't need to find homogeneous solutions, look at initial conditions, etc. This is useful.
• Most interestingly, the Laplace transform will allow us to work with rough functions. In calc 1 and much of what follows in the calculus sequence, we look at functions like cos(t17ln(t)). Of course it turns out that such functions may have little relationship to real world data, which can be very rough: things like pounding a piece of metal with a hammer, for example, or abruptly dropping 200 pounds of sugar in a 100 gallon tank of water. Both of these situations are probably best modeled by functions with jumps or corners, and those are not easily handled by the delicacy of 250 year old calculus. Laplace transforms will allow us to model them very well.
Example 1 What is the Laplace transform of f(t)=1? We must compute ∫0infinity e-stdt. I'll work very slowly, with mathematical propriety: ∫0infinity e-stdt is the limit as A-->infinity of ∫0A e-stdt. Now we need an antiderivative with respect to t of e-st. That's -(1/s)e-st. Then we must evaluate -(1/s)e-st]t=0t=A which is -(1/s)e-sA-{-(1/s)}e-s·0.
I'm trying to go very slowly here, since it is our first computation. Now the exponential function's value at 0 is 1. And also the exponential function dies off very quickly for negative real arguments: the limit of ew is 0 as w-->-infinity, and, in fact, it goes so rapidly to 0 that polynomial growth doesn't stop it: w238ew-->0 as w-->-infinity also.
Since s>0, -(1/s)e-sA-{-(1/s)}e-s·0= -(1/s)e-sA+(1/s)-->1/s as A-->infinity.
Therefore the Laplace transform of 1 is 1/s.
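The limit can be watched numerically (a little Python experiment of mine, mirroring the computation above with s fixed at 2):

```python
import math

def truncated_transform(s, A):
    # integral of e^{-st} from t=0 to t=A, using the antiderivative found above:
    # -(1/s)e^{-sA} + 1/s
    return -(1/s) * math.exp(-s*A) + 1/s

s = 2.0
for A in (1, 5, 10, 50):
    print(A, truncated_transform(s, A))   # creeps up to 1/s = 0.5 as A grows
```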

I observed that the Laplace transform is linear: the sum of Laplace transforms of functions is the Laplace transform of the sum of the functions, and scalar multiplication also works nicely.

The QotD was: what is the Laplace transform of f(t)=5-3t2+9t7?
Here I wanted people to use the linearity of the Laplace transform.
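Linearity gives L(5-3t2+9t7)=5L(1)-3L(t2)+9L(t7); with the standard table entry L(tn)=n!/sn+1 (the n=0 case is the 1/s we just computed) this becomes 5/s-6/s3+45360/s8. A sympy check (my addition, not part of the lecture):

```python
from sympy import symbols, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

F = laplace_transform(5 - 3*t**2 + 9*t**7, t, s, noconds=True)

# compare against 5/s - 3*(2!/s^3) + 9*(7!/s^8) = 5/s - 6/s^3 + 45360/s^8
print(simplify(F - (5/s - 6/s**3 + 45360/s**8)))   # 0
```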

HOMEWORK
Please begin to read the chapter in the book about Laplace transforms, and begin the problems. Do the
Entrance Exam, and have the result of that ready to hand in next Thursday.