**#1 January 17, 2001**

- Why study this subject? (Definite integrals, the radius of convergence of the Taylor series of arctan x, asymptotics of solutions of a non-linear recurrence.)
- The textbook: *Complex Variables and Applications* by Brown & Churchill, sixth edition. We should cover most of the first 8 chapters +/- some material.
- Complex numbers as a+bi with i^{2}=-1, as ordered pairs of reals, as 2-dimensional vectors.
- Complex arithmetic. "Solving" complex linear equations, and learning how to compute the multiplicative inverse of a complex number.
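The inverse computation can be spot-checked numerically; here is a minimal Python sketch (the helper name `inverse` is my own, not from the lecture):

```python
def inverse(z: complex) -> complex:
    """Multiplicative inverse via the conjugate: 1/z = conj(z)/|z|^2."""
    denom = z.real ** 2 + z.imag ** 2   # |z|^2, a real number
    return complex(z.real / denom, -z.imag / denom)

z = 3 + 4j
w = inverse(z)   # (3 - 4i)/25 = 0.12 - 0.16i
print(w, z * w)  # z * w should be 1, up to rounding
```

Multiplying the top and bottom by the conjugate is the same "solving a complex linear equation" trick as done by hand.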

**#2 January 22, 2001**

- Grading will be based on the final exam (40%), two in-class exams (20% each), and homework plus in-class work (20%). Generally homework will be due on Mondays.
- We briefly discussed the entrance "exam" (the first homework assignment).
- I described the polar representation of a complex number (argument, modulus) and gave some reasons for thinking cos theta + i sin theta was e^{i theta}. Discussion of how to multiply in polar form.
- An explanation of why inequalities can't work with complex numbers: you can't write 0<i and expect it to make sense in the same ways such things make sense for real numbers. The substitute: work with modulus or "length".
- Modulus and its properties. How it multiplies, and how the triangle inequality allows estimation. Beginning of analysis of size of a polynomial on the unit circle. A planetary (?) description of how to get an underestimate.
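The multiplication rule in polar form (moduli multiply, arguments add) can be spot-checked with Python's `cmath`; the sample moduli and angles below are arbitrary choices of mine:

```python
import cmath

z = cmath.rect(2.0, 0.5)   # modulus 2, argument 0.5
w = cmath.rect(3.0, 1.1)   # modulus 3, argument 1.1
p = z * w

print(abs(p))          # 6.0: the moduli multiply
print(cmath.phase(p))  # 1.6: the arguments add (mod 2 Pi)
```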

**#3 January 24, 2001**

- In-class work can't be made up (such as the "Locating complex numbers" in the last lecture). Such work will get credit for participation, though it will be graded so students and the instructor can find out what's going on.
- The triangle inequality: |z+w| <= |z|+|w|. The "reverse" (not really): |z|-|w| <= |z+w|: this mechanizes (or algebraizes!) the planetary discussion, and gets the most information by selecting z so |z| is BIG and w so |w| is tiny. Over- and (useful!) under-estimation of the modulus of a certain polynomial on the unit circle.
- Over- and under-estimation of the modulus of a certain rational function on another circle.
- Discussion of the roots of unity, the solutions of z^{n}=1. There are n of them. Specific discussion of what happens when n=5 and n=6. The geometry of the roots in the complex plane. The tables of powers of these roots. Comment that knowledge of this stuff is important in cryptography and in the "Fast Fourier transform", useful in digital signal processing, and CAT scans and NMR and ... (there's no time to go into this!).
- The vocabulary lesson: open disc, interior point, exterior point, open set, closed set, boundary, connected open set, region. The vocabulary "test".
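The roots of unity discussed above are easy to generate and check numerically; a quick sketch (the function name `roots_of_unity` is mine):

```python
import cmath

def roots_of_unity(n: int) -> list:
    """The n solutions of z**n = 1, namely e^(2 pi i k/n) for k = 0..n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for z in roots_of_unity(5):
    # each root lies on the unit circle and satisfies z^5 = 1
    print(z, abs(z), z ** 5)
```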

**#4 January 29, 2001**

- Discussion of the pictures requested last time; use of phrases like "open left half-plane", "punctured disc", "annulus".
- Calculus this week: please read chapter 2 of the text: limits, continuity, differentiability.
- Review of limits for **R**: what a limit is geometrically and precisely, results about limits, some pathology related to limits.
- Changing to **C**: what stays the same, what changes. A complex limit is related to two real limits. Many results and the precise definitions stay the same. Some new phenomena occur in real limits of two variables, but these will hardly be of interest to us in this course.
- The nature of infinity for **C**: there is *one* infinity, and it is "out there", outside of finite clumps of **C**. We introduced the extended complex plane (I forgot to call it "the Riemann sphere") and its association to **C** via stereographic projection, which was discussed.
- The definition of derivative and the computation of the derivative of z^{2}. This, although not itself surprising, leads to some weird stuff: immediately, that the derivative of x^{2}-y^{2}+2ixy (this is the written-out result of squaring x+iy) is just 2x+2iy, which looks strange. This and much else will be discussed on Wednesday.

**#5 January 31, 2001**

- Discussion of the rigid motions and their effects on **F**, requested last time.
- Recall the definition of complex derivative.
- Algorithms for computation of the complex derivative stated.
- The Cauchy-Riemann equations follow from the definition of complex differentiability. The real and imaginary parts of z^{2} as an example. Note that (x^{2}+y^{2}) + i(2xy) doesn't satisfy the Cauchy-Riemann equations at very many points. Note that knowing the real part of a complex differentiable function allows in turn (via integrating first partials) deducing the imaginary part.
- The Cauchy-Riemann equations, along with assumed continuity of the partial derivatives of u and v (if f=u+iv and u,v are the real and imaginary parts of f, respectively), are enough to guarantee complex differentiability. This was a fair amount of algebraic manipulation of the top of the difference quotient defining the complex derivative.
- The definition of analytic function. A random pair of polynomials joined with + and i are hardly likely to be analytic (most likely they are complex differentiable at only a finite number of points).
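A numerical spot-check of the Cauchy-Riemann equations makes the last point vivid. This sketch approximates the partials with central finite differences; the step size and the test point are my choices:

```python
# Check the Cauchy-Riemann equations u_x = v_y and u_y = -v_x at a point,
# approximating the partial derivatives with central differences.
H = 1e-6

def cr_residuals(f, x, y):
    u = lambda x, y: f(complex(x, y)).real
    v = lambda x, y: f(complex(x, y)).imag
    u_x = (u(x + H, y) - u(x - H, y)) / (2 * H)
    u_y = (u(x, y + H) - u(x, y - H)) / (2 * H)
    v_x = (v(x + H, y) - v(x - H, y)) / (2 * H)
    v_y = (v(x, y + H) - v(x, y - H)) / (2 * H)
    return u_x - v_y, u_y + v_x   # both should be ~0 for an analytic f

print(cr_residuals(lambda z: z ** 2, 1.0, 2.0))   # ~ (0, 0)
# (x^2 + y^2) + i(2xy) fails the second equation away from the real axis:
print(cr_residuals(lambda z: complex(z.real ** 2 + z.imag ** 2,
                                     2 * z.real * z.imag), 1.0, 2.0))
```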

**#6 February 7, 2001**

- Definition of analyticity again.
- F'(z)=0 implies F is constant. Discussion of why this is so: even in "real" calculus, the domain must be connected for this to necessarily hold (two intervals, two constant values). Proof first for a function on a disc using the Cauchy-Riemann equations to show that u and v don't change, then "chain" from one point in a domain to another with discs.
- What if (x^{4}-3x^{2}y) + i `SOMETHING` were analytic? Would I lie to you? Only if it were *pedagogically* necessary. Or useful. In this case the Cauchy-Riemann equations show that there is no such "`SOMETHING`". This is because there are "compatibility" conditions required for reconstructing a function of two variables from its gradient: if v_{x}=A and v_{y}=B, then (assume differentiability!) A_{y} must equal B_{x} because v_{xy} should equal v_{yx}.
- Definition of harmonic function. The real and imaginary parts of an analytic function must be harmonic. The imaginary part is called the harmonic conjugate of the real part.
- Discussion of some of the physical "intuition" behind harmonic functions: steady state heat flow in a thin plate. Heat flow and isothermals, each harmonic, as the real and imaginary parts of an analytic function. The physical situation implies that the boundary "data" determines what happens inside, and that the hottest and coldest part of the plate occur on the boundary.
- What if u (the real part) is (sin x)(cosh y)?
- Quick review of the trig functions as the correct solutions of y''=-y, sine is solution with y(0)=0, y'(0)=1, and cosine is solution with y(0)=1, y'(0)=0, useful because they form a convenient basis of the two-dimensional vector space of solutions to this linear constant-coefficient ordinary differential equation.
- How to solve y''=y? If y=exp(kx), then k is +/-1. The "correct" solutions of y''=y are linear combinations of exp(x) and exp(-x): hyperbolic sine (sinh) is the solution with y(0)=0, y'(0)=1, and hyperbolic cosine (cosh) is the solution with y(0)=1, y'(0)=0, useful again because they form a convenient basis of the two-dimensional vector space of solutions to this linear constant-coefficient ordinary differential equation. Of course, sinh x is (1/2)(exp(x)-exp(-x)) and cosh x is (1/2)(exp(x)+exp(-x)), and we drew the graphs.
- (sin x)(cosh y) is harmonic, and has harmonic conjugate (cos x)(sinh y) (I think!).
- We will see (sin x)(cosh y) +i (cos x)(sinh y) must be sin(z)=sin(x+iy) (see p.70 of the text). I tried to indicate why that must be true by using the addition formula for sine coupled with the Taylor series for sine and cosine to show that sin(x+iy) turns out (symbolically!) to be the same as (sin x)(cosh y) +i (cos x)(sinh y). This all to be justified later.
- Pictures of non-linear mappings: I tried to begin by sketching some of the geometric effect of z --> z^{2}: what happens to the line x=1 and then the line y=2. The mapping changes these straight lines in the (x,y) plane to parabolas in the (u,v) plane. One can see some of the qualitative effect of the mapping by viewing it in "polar form", where the modulus is squared and the central angle is doubled. I tried to explain why the parabolas in (u,v) intersected twice.
- Please begin chapter 3 of the text.
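The claimed identity sin(x+iy) = (sin x)(cosh y) + i(cos x)(sinh y) can be spot-checked numerically with Python's `cmath` (the sample point is my choice):

```python
import cmath, math

x, y = 0.7, -1.3
lhs = cmath.sin(complex(x, y))
rhs = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
print(lhs, rhs)   # the two agree, up to rounding
```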

**#7 February 12, 2001**

- Went over the example from last time: how z^{2} maps two lines.
- Showed how the polar description of z^{2} helps with the homework problem of describing the image of the triangle with vertices 0, 1+i, 1-i under the mapping z^{2}. z^{2} doubles "central angles" (the thetas), but only "mod 2 Pi".
- I handed out "The return of F". We slowly worked through it, first figuring out the polar size of F (what interval of r and theta contains F), then a rough picture of F's image under z^{2} and 1/z. I emphasized that we could only draw rough pictures since the pictures supplied only qualitative data. We were able after some discussion to reach some agreement on what to draw. I mentioned that there were some finer properties of the images we wouldn't discuss in detail in this course, such as the facts that angles were preserved under the mappings (so right angles went to right angles) and orientation was preserved, so the images were all readable versions of F.
- I handed out "The revenge of F", and students were asked to find out what happens when F is mapped by z^{3}. We agreed it would all be a mess if I had asked for z^{10}: there would be overlapping images, etc.
- Introduction of the most important analytic function, exp. If exp(z)=e^{z}=e^{x+iy}=e^{x}(cos y + i sin y), then we should define the exponential function as one whose real part, u(x,y), is e^{x}cos y and whose imaginary part, v(x,y), is e^{x}sin y. We proceeded using that definition.
- exp(z) is analytic (we used the Cauchy-Riemann equations).
- exp'(z)=exp(z) (we used some form of f'(z) from before). All of the nice calculus properties of exp are still correct.
- exp(z_{1}+z_{2})=exp(z_{1})exp(z_{2}). This was verified with tedious manipulations using the addition formulas from trigonometry for sine and cosine. I noted that exp(0)=1 by direct computation, so that exp(-z)=1/exp(z). All of the nice algebraic properties of exp are still correct.

- A detour: when is e^{z}=-3? After some thought, we found that the real part of z was ln(3), but the imaginary part of z had to be Pi, **mod 2 Pi**, so that there were many such z's: any z of the form ln 3 + (2n+1) Pi i (here n is any integer), a very strange situation if one is not used to it.
- exp is periodic, with period (2 Pi)i (easy to verify since the appearance of y in both u and v is through 2 Pi-periodic functions).

- When is exp equal to 7? Again, because of the interesting periodicity of the analytic version of exp, the answer is the sequence ln 7 + (2n Pi) i (again, n is any integer).
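Both detours can be checked in a few lines; here is a sketch for e^{z}=-3 (the sampled values of n are arbitrary choices of mine):

```python
import cmath, math

# All solutions of e^z = -3 have the form z = ln 3 + (2n+1) Pi i, n any integer.
for n in (-1, 0, 2):
    z = complex(math.log(3), (2 * n + 1) * math.pi)
    print(z, cmath.exp(z))   # exp(z) is -3 up to rounding, in every case
```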

**#8 February 14, 2001**

- Review of the exponential function and discussion of its geometric effects.
- Exp maps horizontal lines (Im z constant) to rays (halflines) emanating from the origin. It maps vertical lines (Re z constant) to circles centered around the origin. Connection of the picture to the periodicity of exp.
- Explicit solution of e^{z}=`SOMETHING` leads to formulas for z which can be used to define an inverse to the exponential function, log: log z = ln |z| + i arg z. Here ln is the standard "natural log" from calculus.
- Computation of log z of course gives many answers, a consequence of the fact that exp is (2 Pi i)-periodic. Historically one could then discuss log as a "many-valued function", but this may be irritating to people who want one answer. Another solution is to restrict the "arg" in the formula for log.
- The text's notation: Log z is ln |z| +i Arg z, where the argument is chosen to be in the half-open interval (-Pi,Pi], the equivalent of "cutting" or "slitting" the plane along the negative real axis (and also excluding 0). Comment that other "cuts" may also be used in other situations.
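Python's `cmath.log` computes exactly this principal branch (Log z = ln |z| + i Arg z with Arg in (-Pi, Pi]), which makes the "many answers" point easy to see:

```python
import cmath, math

print(cmath.log(-1 + 0j))      # pi*i: ln|-1| = 0 and Arg(-1) = Pi
print(cmath.log(1j))           # (pi/2)*i
# Log is not a full inverse of exp: it recovers z only up to multiples of 2 Pi i.
w = complex(0.0, 3 * math.pi)  # exp(w) = -1, yet Log(exp(w)) = pi*i, not w
print(cmath.log(cmath.exp(w)))
```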
- Discussion of problem 6, section 22. Here a function g(z) is defined in the set r>0 and 0 < theta < 2 Pi by g(z)=ln r + i theta. Use of the polar form of the Cauchy-Riemann equations shows that g is analytic in this cut plane (cut along the positive real axis), and a further formula from "the polar section" shows immediately that g'(z)=1/z. g is another variant of log. Then the problem asks that g(z^{2}+1) be considered for x>0 and y>0. This, the core difficulty of the problem, comes down to: why is this composition defined? That is, why, if z is chosen in the first quadrant, does z^{2}+1 end up in the domain of g, that slit plane? Since the imaginary part of z^{2}+1 is 2xy, this imaginary part is positive because we know both x and y are positive, so in fact z^{2}+1 is in the upper half-plane, a part of the domain of g. Then it is easy to finish the problem with the chain rule.
- Chapter 3 abounds with information and formulas about many functions: trig, inverse trig, hyperbolic, inverse hyperbolic, and complex powers. Here I'll have a limited discussion of sine, cosine, and z^{c}.
- We already know sin(z)=sin(x+iy)=sin(x)cosh(y)+i cos(x)sinh(y), so that cosine, which "should" be the derivative of sine, has the formula cos(z)=cos(x+iy)=cos(x)cosh(y)-i sin(x)sinh(y). These formulas are different from but equivalent to those in the text.
- The complex sine function has values which are unnerving to the students new to the subject: we "solved" sin(z)=2. The answers are z= Pi/2 + 2Pi n + (1.31695...)i and Pi/2 + 2Pi n - (1.31695...)i, where n is any integer. The numbers 1.31695... and -1.31695... occur because they are approximate roots of cosh(y)=2.
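These unnerving values are easy to confirm numerically; 1.31695... is arccosh(2), so a quick sketch (the sampled values of n are my choices):

```python
import cmath, math

y0 = math.acosh(2)   # 1.31695..., a root of cosh(y) = 2
for n in (0, 1, -3):
    for s in (1, -1):
        z = complex(math.pi / 2 + 2 * math.pi * n, s * y0)
        print(z, cmath.sin(z))   # sin(z) is 2 up to rounding, every time
```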
- When y=0, sin(z) is the old sin(x) from calculus. When x=0, sin(z) is i sinh(y), an unbounded function. So sine is unbounded as an analytic function. It is still 2Pi-periodic, however.
- Students were asked to solve cos(z)=i (an approximate value of arcsinh(1) was given: .881373...).
- Current plans are to have the first exam in two weeks (February 28) covering material up to what is done in class on February 21. Also, in addition to availability before and after class and by appointment, the instructor will try to have an evening "office hour" on Tuesday, February 27. Review material will be given out.

**#9 February 19, 2001**

- Further discussion of log: what is a "branch" of log (an inverse function to exp in a connected open set). There are infinitely many branches of log, and their derivatives all must be 1/z.
- Contrast of the definition of sine and cosine via real and imaginary parts and the definition in the text using linear combinations of exp(iz) and exp(-iz).
- a^{b} is defined to be exp(b log(a)), and the "multivalued" nature of log provides many difficulties for a^{b}. Discussion of sqrt(z) as a mapping: it halves central angles, and it has two values. The principal branch of sqrt(z).
- What is (1+i)^{sqrt(3)}? It has infinitely many values.
- Complex integrals as just the complex linear extension of real integrals: if f is a continuous complex-valued function of the real variable t, with t varying between a and b, then the integral of f from a to b is (the integral of the real part of f from a to b) + i (the integral of the imaginary part of f from a to b).
- Estimation of integrals: the complex-valued triangle inequality states that |the integral of f from a to b| is less than or equal to the integral of |f| from a to b. I discussed why this statement was reasonable by resorting to a Riemann sum analogy. A real proof is on page 89 of the text, using a neat algebraic trick.
- We will integrate functions over curves in the plane. A curve or contour is a parameterized object, z(t)=x(t) + i y(t), where x(t) and y(t) are continuous functions of t for t between a and b. We discussed "simple curve" (no self-intersections) and "simple closed curve" (beginning=end, but no other self-intersections). It is sometimes not clear what the "inside" and "outside" of a simple closed curve are.
- Today's question: what is the limit as R approaches +infinity of the integral from 0 to 1 of exp(iRt) dt?

**#10 February 21, 2001**

- Discussion of last class's quiz: the limit as R tends to infinity of the integral from 0 to 1 of exp(i R t) dt. Compute via antidifferentiation and estimate the modulus of the result using the triangle inequality on the top.
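A sketch of that computation: the antiderivative gives (exp(iR) - 1)/(iR), whose modulus is at most 2/R by the triangle inequality, so the limit is 0. Numerically:

```python
import cmath

def integral(R: float) -> complex:
    """The integral of exp(iRt) dt from t=0 to t=1, by antidifferentiation."""
    return (cmath.exp(1j * R) - 1) / (1j * R)

for R in (10.0, 1e3, 1e5):
    print(R, abs(integral(R)))   # the modulus shrinks like 1/R
```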
- Definition of the integral of a complex function over a contour. The functions will be piecewise continuous, and the contours will be piecewise differentiable in this course. In fact, with rare exception, they will be line segments, circular arcs, or concatenations of these (one following the other).
- Examples: the integral of |z|^{2} from 1 to i on two paths: a circular arc centered at 0 and a straight line segment. The results are different.
- The value of the integral does not depend on the parameterization.
- The ML inequality: this is a simple consequence of the integral estimate |integral of f| <= integral of |f| from the last lecture, together with the observation that the integral of |z'(t)| dt over a curve is the length of the curve.
- Estimation of integrals as the contour changes (this is all in preparation for things happening later: I find this an awkward way to learn mathematics, but it is supposedly more efficient than a more historic approach, ontogeny replicating phylogeny). We discussed Example 5 on page 101 of the text, and then looked at a rational function (something like (z^{3}+2z^{2}+3)/(z^{5}+287z^{3}-22)) integrated over a circle of radius R as R tends to infinity. The estimations are easy, provided one remembers to multiply by the length, and estimate a quotient by an overestimate of the top and an underestimate of the bottom. The triangle inequality and reverse triangle inequality frequently can be used to get estimates that are good enough.
- Back to examples: we integrated z^{2} from 1 to i along the two paths we tried before (a circular arc and a straight line segment). We got the same answer. This answer turned out to be identical to what we would get if we took F(z)=(1/3)z^{3} and computed F(i)-F(1).
- Statement of a theorem: if we have a connected open set U and if f is analytic in U and if F is analytic in U with F'=f, and if C is a contour in U, then the integral of f over C is just F(end)-F(start). Then we observed that if this result (as yet unproved in this class!) is correct, simple consequences follow: the integral of f will be path-independent (only depend on the endpoints of C) and the integral of f over a simple closed curve C in U will be 0 (since start=end).
- Example: the integral of 1/z over the unit circle oriented counterclockwise centered at 0. Since I don't know an antiderivative, we computed this by parameterization and got 2 Pi i. The theorem does NOT apply, and, further, we have just found algebraic justification that there is no branch of log in any domain containing the unit circle (for, if there were, then that "log" would have derivative 1/z, and by the result stated/not proved just above, the integral would be 0).
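That 2 Pi i computation is easy to reproduce with a Riemann-sum approximation of the contour integral (the step count N is my choice):

```python
import cmath, math

def circle_integral(f, N: int = 20000) -> complex:
    """Riemann-sum approximation of the integral of f over the unit circle,
    parameterized as z(t) = e^{it}, 0 <= t <= 2 Pi."""
    dt = 2 * math.pi / N
    total = 0j
    for k in range(N):
        z = cmath.exp(1j * k * dt)
        total += f(z) * 1j * z * dt   # f(z(t)) z'(t) dt
    return total

print(circle_integral(lambda z: 1 / z))    # close to 2 Pi i
print(circle_integral(lambda z: z ** 2))   # close to 0: an antiderivative exists
```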
- Today's question: what is the integral of sqrt(z) from 0 to i (do either by parameterization or by finding an antiderivative).

**#11 February 26, 2001**

- Proof of the statement: in a connected open set, with F'=f, f continuous, then the integral of f over a contour C equals F(end of C)-F(start of C). The proof was gotten by applying the chain rule and the Cauchy-Riemann equations.
- Thus for such f's, the integral over closed curves is 0 and the integral depends only on the end points of the curve (and the latter two statements are logically equivalent).
- If either the integral over closed curves is 0 or the integral depends only on end points, then f must have an antiderivative, that is, there is an analytic F with F'=f. The proof was as given in the text.
- Creation of F was similar to manipulations in mathematical physics, choosing a ground state, creating a potential, etc.
- As examples we considered again the integral of 1/z around the unit circle and the integral of 3/(5z^{8}) around the unit circle. The former has no antiderivative, since the integral is 2 Pi i, nonzero, while the latter has integral 0 since the function does have an antiderivative (using standard calculus manipulations).
- We stated and proved a version of Cauchy's Theorem, that the integral of a function f over a simple closed curve is 0 if the function is analytic on and inside the curve. A proof using Green's Theorem was given. Cauchy's Theorem was announced to be the chief result in the course.
- Some very elementary uses of Cauchy's Theorem were given, including deforming some contours without altering the values of the integrals as long as the integrand is analytic inside the region where the deformation occurs.

**#12 February 28, 2001**

- The first exam.

**#13 March 7, 2001**

*Oh well, another class missed! Snow is "lovely, dark and deep," but driving and even walking can be treacherous, especially for evening classes.*

- A review. First, the logical equivalence of results about the existence of an antiderivative, integral = 0 on closed curves, and path independence. Second, statement of a version of Cauchy's Theorem: the integral around a simple closed curve of a function analytic in and on the curve is 0. Some possibly instructive examples were discussed.
- The Goursat confession: the proof of Cauchy's Theorem presented during the last class used Green's Theorem, and assumed the continuity of the partial derivatives of u and v, the real and imaginary parts of f. The assumption of continuity is not necessary. A clever proof involving subdivision (in the text) shows this. This variant is called the Cauchy-Goursat Theorem.
- Definition of "simply connected". A domain is simply connected if the inside of every simple closed curve in the domain is completely contained in the domain. We explored examples of domains which were and were not simply connected.
- Another version of Cauchy's Theorem: If a function is analytic in a domain which is connected (one piece) and simply connected (no holes) then the integral of the function around any closed curve in the domain is 0. Also, by the previous result, such functions must have antiderivatives in the domain.
- More "physical intuition": if f = u + i v is complex analytic, then u and v are harmonic. They represent steady state heat distributions on, say, a thin plate where some sources and sinks of heat are given on the boundary of the plate. Therefore nature (?) determines the temperature at a point inside the plate from the boundary data. The transition from boundary data to temperature at an inside point should (?) be given by some mathematical formula. Here we present a complex analysis approach to this idea.
- Discussion and substantial proof of the Cauchy Integral Formula, winding up with the statement in the text. What's used are ideas about deforming closed curves defining integrals of analytic functions, combined with the 2 Pi i result (the integral of 1/(z-z_{0}) around a circle is either 2 Pi i or 0, according to whether the circle does or does not enclose z_{0}).
- An example of the CIF: integrate exp(z)/z around the unit circle and get 2 Pi i. The real and imaginary parts, parameterized in the standard way, give definite integrals with values we now know. These values can't be found (symbolically) by the current version of Maple.
- Consider the function exp(z)/((z-5)(z+4)). I found the integral around various closed curves (such as a triangle whose interior didn't contain 5 or -4, and a small circle around 5). Then I asked students to compute the integral of this function around a small circle centered at -4, and a dumbbell-shaped region whose "balls" contained, respectively, -4 and 5.
- The first exam was returned. The grades on the first exam varied from 20 to 97. A comprehensive grade, including the exam grade, homework, and in-class quizzes appropriately weighted as mentioned earlier, was also computed for each student who took the exam (there were 17 such). These grades ranged from 22.255 to 99.1994. The following scale was used to go from number to letter grade in both cases: [0,50): F; [50,55): D; [55,65): C; [65,70): C+; [70,80): B; [80,85): B+; [85,100]: A. I don't like students to be surprised about their course grades, so I thought I would indicate progress so far seriously, with some detail.

**#14 March 19, 2001**

- I reviewed the foundational theorems.
- Cauchy's Theorem: If f is analytic on and inside a simple closed curve C, then the integral of f over C is 0.
- The Cauchy Integral Formula (CIF): If f is analytic on and inside a simple closed curve C, and z_{0} is inside C, then f(z_{0}) = (1/(2 Pi i)) times the integral around C of f(z)/(z-z_{0}).
- (Deformation of closed curves; lemma 2 on page 118 of the text.) Suppose C_{1} and C_{2} are simple closed curves, and C_{2} is inside C_{1}. If f is analytic on C_{1} and C_{2} and the region between them, then the integral of f over C_{1} equals the integral of f over C_{2}.

A whole series of very famous, very strange, and "powerful" results are now available to us.

- We tried taking the derivative of the CIF. This involved some limits, and real care was needed. We made estimates on how big/how small/how stable certain remainder terms were. This is the key here and in many developments later in the course. The algebra is slightly intricate but really not too surprising. The result is a differentiated CIF, leading to an expression for f'(z_{0}): it is (1/(2 Pi i)) times the integral around C of f(z)/(z-z_{0})^{2}. It turns out that this is the same result one would get by just straightforwardly differentiating the CIF without paying any attention to estimates of errors, etc. In fact, one can continue the process and magical formulas for derivatives appear: if n is a positive integer, f^{(n)}(z_{0}) equals (n!/(2 Pi i)) times the integral around C of f(z)/(z-z_{0})^{n+1}. This is the differentiated CIF, and conceals a wonderful result: if f is analytic (recall the definition: only an assumption about f', the first derivative), then *every* derivative of f exists.
- Here's a further surprising result. Suppose f is just a continuous function in a domain U, and the integral around every simple closed curve of f is 0. Then we know from previous results that f has an antiderivative (a "potential"), F: F'=f. But F is analytic, so all derivatives of F exist; in particular, F''=f' exists, so f is analytic. This is Morera's Theorem: if all integrals around simple closed curves of a continuous function are 0, then the function is analytic. The result is a sort of converse to Cauchy's Theorem. So results about integrals imply results about derivatives!
- The following statements are logically equivalent (for a function
f in a simply connected domain):
- f is analytic (that is, f' exists at every point).
- If f=u+i v, then u and v have continuous first partials, and u and v satisfy the Cauchy-Riemann equations.
- f has derivatives of all orders.
- The integral of a continuous f around all simple closed curves is 0.

- In calculus, control of f over, say, intervals from -1 to 0 and 7 to 10 doesn't necessarily lead to estimates of the size of f' at 2. That is, if we knew that f was very small in [-1,0] and [7,10], then we couldn't necessarily predict anything about f'(2): simple pictures tell you that. In complex analysis, operating on a different planet than calculus, such a result *is* true. By specializing the differentiated CIF to circles we got the Cauchy estimates: |f^{(n)}(z_{0})| is less than or equal to (n!/R^{n}) M_{R}, where f is analytic on and inside a circle of radius R centered at z_{0} and M_{R} is the maximum value of |f(z)| on the circle. The notation is that of the text. This means, for example, that if M_{R} is very small, then all the derivatives of f at z_{0} approach 0. Startling.
- Even more startling was my attempt to verify that sin x (the sine function from calculus) was constant. My "logic" was as follows (f will be sin x): |f'(z_{0})| was less than or equal to (1!/R) M_{R}, and in this case M_{R} was always 1. But R could be anything, so f' had to be 0. Since the specific value of z_{0} never mattered, f' was 0 everywhere. So the function is constant. Hmmm ... well, the Cauchy estimates aren't valid in real calculus, and the complex sine function is *not* bounded, so the whole deduction is invalid here, but the logic with the estimates etc. is valid, and we have proved Liouville's Theorem: if f is analytic in the whole complex plane ("f is entire"), and if there is some M with |f(z)| less than or equal to M for all z ("f is bounded"), then f is constant. ("A bounded entire function is constant.") This result is really weird, and should convince you that we are not only on another planet, but in another galaxy. I spent some time seeking counterexamples to Liouville's Theorem (sin z is not one!). I looked for "bumps": 1/(1+x^{2}) is a nice bump (the Cauchy distribution in statistics) and, using the principle "change x to z" to create an analytic function, I suggested this was a counterexample. I was "corrected" by students who noted that the resulting function was not analytic at i and -i. I then moved to the famous bump function defining "erf": exp(-x^{2}). The analytic function exp(-z^{2}) was displayed as a candidate for a counterexample to Liouville's Theorem. After contemplation, the suggestion was made to examine this function on the imaginary axis, where z=iy: there exp(-z^{2}) becomes exp(y^{2}), which is rather unbounded. So I gave up the business of finding counterexamples.
- The problem of the day: consider the function exp(2z)/(z-5)^{7}. I asked people what the integral of this around the unit circle was and they replied, "0". So this was too easy. I asked what happened if the circle was deformed a bit, and again the answer seemed to be 0. So then I asked what the value of the integral of this function around a circle centered at 5 was, and emphasized that very little computation needed to be done.
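The point of the last question is that the differentiated CIF with f(z)=exp(2z), n=6, z_{0}=5 gives the answer with no contour computation at all: (2 Pi i/6!) f^{(6)}(5) = (2 Pi i/720) 2^{6} e^{10}. A numerical sketch confirming this (circle radius and step count are my choices):

```python
import cmath, math

# Differentiated CIF prediction for the integral of exp(2z)/(z-5)^7
# around a circle centered at 5: (2 pi i / 6!) * f^(6)(5), with f(z) = exp(2z),
# so f^(6)(5) = 2^6 * e^10.
predicted = (2j * math.pi / math.factorial(6)) * (2 ** 6) * math.exp(10)

# Riemann-sum check on the circle z = 5 + e^{it}:
N = 20000
dt = 2 * math.pi / N
total = 0j
for k in range(N):
    w = cmath.exp(1j * k * dt)      # point on the unit circle
    z = 5 + w
    total += cmath.exp(2 * z) / (z - 5) ** 7 * (1j * w * dt)   # f(z) dz
print(predicted, total)   # the two agree up to discretization error
```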

**#15 March 21, 2001**

- How many ways can we compute the integral of 1/z^{4} over the unit circle?
  - By direct parameterization (z=e^{i theta}, dz= ... etc.).
  - By finding an antiderivative (-1/(3z^{3})) in the punctured (0 taken out) complex plane.
  - By using the CIF for derivatives for f(z)=1, n=3, and z_{0}=0.
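All three methods agree that the integral is 0; the parameterization method, for example, reduces to integrating i e^{-3 i theta} over a full period. A numerical sketch (the step count N is my choice):

```python
import cmath, math

# Direct parameterization: z = e^{it}, dz = i e^{it} dt, so the integrand of
# (1/z^4) dz becomes i e^{-3it}, which integrates to 0 over a full period.
N = 20000
dt = 2 * math.pi / N
total = sum((1 / cmath.exp(1j * k * dt) ** 4) * 1j * cmath.exp(1j * k * dt) * dt
            for k in range(N))
print(abs(total))   # ~0
```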
- Quick review of what we did last time, ending with Liouville's Theorem.
- Student specification of a polynomial followed by the instructor doing some analysis of the modulus of the polynomial leads to the conclusion that the polynomial must have a root.
- Statement and proof of the Fundamental Theorem of Algebra, and some discussion of its history.
- Review of Taylor's Theorem from calculus, followed by a statement of "Taylor's impression": that f(x) and its Taylor series centered at 0, say, must be related -- that they should indeed be equal.
- Consideration of the function whose value at x=0 is 0 and whose value elsewhere is exp(-1/x^{2}). This function can be differentiated arbitrarily often (away from 0 the chain rule applies easily), and at 0 use of l'Hôpital's rule shows that every derivative exists and is 0. A formal proof of these claims would use mathematical induction combined with some recognition of the algebraic structure of the n^{th} derivative away from 0.

**#16 March 26, 2001**

- I discussed the solution of four homework problems (§ 38: 5c & 8 and § 40: 2a & 3). I tried to emphasize (would "harangue" be a better word?) that students should indicate why and how their computations work in their written solutions. Certainly I am interested in the answers (sqrt(2) Pi *is* interesting!) but checking the process is more valuable to me. In this course, there is frequently an intricate interaction between the geometry (curves, domains) and the algebra or analysis of the functions (where are they singular, what are their growth properties), and some precision is needed when fundamental results such as Cauchy's Theorem, the CIF, or the "deformation of closed curves" are applied. So **please explain your answers clearly!**
- Back to work: we saw last time that there need be little connection between a Taylor series and the function, and that indeed they need only be equal at the "center" of the Taylor series. In this course, much better results are true. To state them precisely, and to understand their verification, we'll need to be a bit more precise about various terms.
- "An infinite series converges if its sequence of partial sums converges." Although everyone who has en{joyed/dured} a calculus course has heard such a sentence, we still should take it apart and try to understand it.
- I copied the notation of the book quite closely in what follows. We defined "an infinite series" as an infinite sum. This "sum" had to be *symbolic* because I, a mere mortal, don't have the time and ability to actually add up an infinite number of numbers. Then we defined the "sequence of partial sums", each term of which *could* in theory be computed by a human (maybe assisted by silicon!). We discussed what "a sequence" means in general (a complex-valued function whose domain is the positive integers). We defined what "a sequence converges" means: I tried to indicate what this meant geometrically in terms of all but finitely many terms being within a disc centered at the limit of the sequence. I also gave the more traditional algebraic definition (epsilon, etc.).
- By comparing discs with inscribed and circumscribed squares, I tried to verify that a sequence converges exactly when its real and imaginary parts separately both converge.
- We discussed absolute convergence of a complex series, and using some neat but intricate logic (borrowed from the text!) verified that if a complex series converges absolutely, then it must converge.
- Finally, we considered the geometric series: 1+z+z^2+z^3+... Here the partial sums can be explicitly computed (a rather rare occurrence). The formula for S_N, the sum of z^n as n ranges from 0 to N-1, is gotten by multiplying this sum by z, subtracting the result from the original, and simplifying: S_N = 1/(1-z) - z^N/(1-z). We then get the following: the distance between S_N and 1/(1-z) is less than or equal to rho^N/(1-rho) for all z with |z| =< rho. When rho is itself less than 1, we get convergence, and S_N approaches the sum of the geometric series, 1/(1-z). The *rate* of approach is measured by the difference, which is certainly overestimated by rho^N/(1-rho). This estimate is valid for all z's inside the closed disc of radius rho centered at the origin.
- I asked people where the sum of exp(nz) (n from 0 to infinity) converged, and what this sum was.
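The error bound rho^N/(1-rho) is easy to test numerically; here is a small Python sketch (mine, not the book's) checking it at a few points with |z| =< rho:

```python
def partial_sum(z, N):
    # S_N = 1 + z + z^2 + ... + z^(N-1)
    return sum(z**n for n in range(N))

rho = 0.8
N = 50
bound = rho**N / (1 - rho)   # the lecture's overestimate rho^N/(1-rho)
for z in [0.8, -0.8, 0.8j, 0.5 + 0.5j]:   # all satisfy |z| =< rho
    err = abs(partial_sum(z, N) - 1 / (1 - z))
    assert err <= bound + 1e-12
    print(z, err, bound)
```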

**#17 March 28, 2001**

- Again we considered the partial sums of the geometric series. We analyzed what happened when rates of convergence of the partial sums are considered: the rate of convergence for all z's with |z| less than or equal to rho, a positive real number less than 1. I analyzed the logic statement (4 quantifiers thick!) involved in convergence of the geometric series. I tried to illustrate how it could be "simplified" in practice to get easier error estimates, and that this simplification was called "uniform convergence".
- We considered the example of a sequence of functions: f_n(x)=1/(1+nx^2). Certainly each one of these functions is continuous, and each graph is simple. The limit of these functions is a function whose value at 0 is 1 and whose value elsewhere is 0. So the limit is not continuous. This horrid example can also be seen as a failure of two limits to interchange: the limit as x approaches 0 and the limit as n approaches infinity. So care needs to be taken to avoid problems similar to those of this bad example.
- **Magic** We did the manipulations which take the CIF and bring out the Taylor series. An interchange of integral and infinite sum is required. This we justified using uniform convergence of geometric series in a suitable domain. I mentioned Weierstrass, my professional great^4(?)-grandfather. He is known for explaining clearly and carefully why the limit interchanges which occur in the manipulations just described are correct and do not lead to the problems of the bad example mentioned previously.
- Example: the exponential function with its Taylor series centered at 0. I asked people to guess at the Taylor series of the function f(z)=exp(z^3) and then to tell me what f^(427)(0) was.
- I'll not follow the order of the text for a while: §45 will be followed by §48 and then §49. Then I'll go back and discuss Laurent series (in §46 and §47 of the text).
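The exp(z^3) question can be settled from the series alone: exp(z^3) is the sum of z^(3n)/n!, so the k-th Maclaurin coefficient is 1/(k/3)! when 3 divides k and 0 otherwise, and f^(k)(0) is k! times that coefficient. A short sketch (my own, using this observation):

```python
import math

def deriv_at_0(k):
    # exp(z^3) = sum_n z^(3n)/n!, so the k-th Maclaurin coefficient is
    # 1/(k//3)! when 3 divides k, and 0 otherwise; f^(k)(0) = k! * coefficient.
    if k % 3 != 0:
        return 0
    n = k // 3
    return math.factorial(k) // math.factorial(n)

print(deriv_at_0(427))  # 427 is not a multiple of 3, so this is 0
print(deriv_at_0(6))    # coefficient of z^6 is 1/2!, so f^(6)(0) = 6!/2! = 360
```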

**#18 April 2, 2001**

- A quick review of last time: the convergence of geometric series allows uniform error estimates in discs with radius less than 1, so there the series converges absolutely and uniformly. Algebraic manipulation of the Cauchy integral formula allows us to deduce that inside any disc in which a function is analytic, the Taylor series for that function (centered at the center of the disc!) converges to the function.
- Suppose we assume that a power series with general term a_n z^n converges at w_0 (a non-zero complex number). Then a_n w_0^n must converge to 0, since the sum converges. A weaker result then follows: |a_n w_0^n| is bounded by some positive constant M. Now consider the infinite series at some complex number w_1 with |w_1| < |w_0|. If rho is the quotient |w_1/w_0|, then rho < 1, and the series at w_1 is overestimated by the series whose general term is M rho^n. This geometric series certainly gives some control over the rate of convergence of the original power series for *any* z in the disc centered at 0 with radius |w_1|.
- If a power series with general term a_n z^n converges at w_0 (a non-zero complex number), then, given w_1 with |w_1| < |w_0| and given epsilon > 0, there is a positive integer N so that the partial sum P_N(z) of the first N terms of the power series (the sum as n goes from 0 to N-1) differs from the sum of the series by an error term, Err(z,N), and this error term has modulus less than epsilon *for all* z with |z| =< |w_1|. Indeed, the power series converges uniformly and absolutely for such z.
- We used this result for extremely quick verification of really amazing results. Suppose f(z) is the sum of the power series, where it converges.
  - f must be continuous. Verification: we need to show, given z_1 and a positive epsilon, that there is delta > 0 so that when |z-z_1| < delta, then |f(z)-f(z_1)| is less than epsilon. Replace each value of f by the polynomial P_N + Err, and select N large so that the error terms are small no matter what z and z_1 are. Then continuity of f comes down to continuity of the specific polynomial P_N.
  - f must be analytic. Verification: we'll use the criterion suggested by Morera's Theorem. Integrate f around a closed curve. The integral of P_N must vanish since polynomials are analytic. The integral of the other part can be estimated by the familiar ML inequality. Certainly L doesn't change as N changes, and by selecting N large enough and using uniformity of the estimate of the size of Err, we can make M as small as we like. Therefore the integral of f over closed curves must be 0, and f must be analytic.
  - Integrate f multiplied by 1/(z-z_0) over a simple closed curve C where z_0 is inside C. Again, the polynomial part integrates to its value at z_0, and the f part integrates to f(z_0) since f is analytic. The Err integral is small when N is large. Therefore we just get back that f(z_0) *equals* the sum of the power series at z_0. If we change 1/(z-z_0) to 1/(z-z_0)^2, things change. We see that f'(z_0) equals the sum of n a_n z_0^(n-1): that is, the power series can be differentiated term-by-term, and the result is the derivative of the function.

- Digression: we glanced at certain Fourier series (the sum of a_n sin((something) x)). With simple hypotheses on the a_n's, these series converge uniformly. But even with a choice of a_n like 1/n! (approaching 0 very rapidly) it may be impossible to differentiate the series term-by-term, even though we'd like to since it would match our naive expectations (take "something" to be, say, (n!)^2).
- We've verified many results already.
- Power series must converge in discs. Away from "the edge", each series converges absolutely and uniformly, and the resulting sum is an analytic function.
- The derivative of the sum is the sum of the derivatives. By repeating this result and evaluating at the center of the disc, we see that a power series *must* actually be the Taylor series of the function which is its sum.
- Any two power series which have the same sum have the same coefficients.

- The radius of convergence of the Taylor series of a function analytic in an open set U, centered at a point z_0 of U, is at least the distance to the boundary of U from z_0: this follows from the manipulation of the CIF done last time.
- We computed the sum of the series whose general term is n/2^n by looking at the geometric series whose general term was z^n and sum was 1/(1-z). If we differentiate and multiply by z and substitute 1/2 for z, we get an infinite series whose n-th term is n/2^n. The same manipulations applied to 1/(1-z) yield the function z/(1-z)^2, whose value at 1/2 is 2. So the sum of n/2^n is 2. I asked people to find the sum of the series whose n-th term is n^2/2^n.
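Applying the differentiate-and-multiply trick twice to 1/(1-z) gives the closed form z(1+z)/(1-z)^3 for the sum of n^2 z^n, which is 6 at z=1/2. A quick check of both the partial sums and the closed form (my sketch, not from the lecture):

```python
# Partial sums of n^2 / 2^n converge to 6, matching the closed form
# z(1+z)/(1-z)^3 obtained by differentiating twice and multiplying by z.
def closed_form(z):
    return z * (1 + z) / (1 - z) ** 3

s = sum(n**2 / 2**n for n in range(1, 200))
print(s, closed_form(0.5))
```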

**#19 April 4, 2001**

- A review of conditions equivalent to analyticity.
- Being analytic is a very special condition: why would people be interested in it? I considered a "square wave", a function which is 1 from -1 to 1 and 0 otherwise. I remarked that one way to study this is the "Fourier transform", which tries to find the amplitudes of periodic functions which might add up to the square wave. Periodic functions might be sine or cosine, and we can combine these by looking at the integral from -infinity to +infinity of the square wave (in t) multiplied by exp(ixt) dt. I computed this, and after some effort, recognized the result as (constant) (sin x)/x. The function (sin x)/x is actually an entire function, because the power series for sin x converges everywhere and has a factor of x in all of its terms. Dividing out that factor leaves a power series which converges for all x, and it therefore represents an entire function.
- The remainder of the lecture was spent discussing the existence and uniqueness of the Laurent expansion of functions analytic in an annular region. We first defined an annulus, which is the open connected set of points between two concentric circles (r < |z| < R). Special cases occur when r=0 or R=infinity.
- The CIF contour for a point in the annulus can be deformed into two concentric circles. The integrand for each of the circles can be "expanded" using geometric series, and then the result can be integrated term-by-term. The validity of the process is much the same as was discussed for power series. The resulting "expansion" is a sum of complex constants multiplying integer powers of (z-z_0), where now the integers are *all* integers, ranging from -infinity to +infinity. There is only one such expansion (this result needed facts about the value of the integral of z^n around circles centered at the origin: 0 for n not equal to -1, and 2 Pi i when n is -1).
- The Laurent expansions are valid only in specific annuli. One function can have distinct Laurent expansions in different regions. As an example, I considered the function (2z+3)/(z(z+1)^2). It is analytic in two different annuli centered at 0. We quickly got the Laurent expansions in powers of z by observing that (2z+3)/(z(z+1)^2) = ((2z+3)/z)(1/(z+1)^2) = (2+(3/z))(1/(z+1)^2), and an expansion of 1/(z+1)^2 can be obtained by differentiating 1/(z+1) = 1-z+z^2-z^3+... (and changing sign).
- I hope that review problems for the second exam will be given out at the next class (Monday, April 9) and the second exam will be given on Monday, April 16.
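The coefficient formula c_n = (1/(2 Pi i)) times the integral of f(z) z^(-n-1) around a circle inside the annulus can be tested numerically. A sketch (my own) for (2z+3)/(z(z+1)^2) in the annulus 0 < |z| < 1: multiplying out (2+3/z)(1-2z+3z^2-...) predicts c_(-1) = 3 and c_0 = -4.

```python
import cmath
import math

def f(z):
    return (2*z + 3) / (z * (z + 1)**2)

def laurent_coeff(n, r=0.5, steps=4000):
    # c_n = (1/(2 pi i)) * integral over |z| = r of f(z) z^(-n-1) dz,
    # approximated by a Riemann sum over the circle (r inside the
    # annulus 0 < |z| < 1)
    total = 0j
    for k in range(steps):
        z = r * cmath.exp(2j * math.pi * k / steps)
        dz = z * 2j * math.pi / steps
        total += f(z) * z**(-n - 1) * dz
    return total / (2j * math.pi)

print(laurent_coeff(-1))   # ≈ 3: the coefficient of 1/z (the residue at 0)
print(laurent_coeff(0))    # ≈ -4, matching (2 + 3/z)(1 - 2z + 3z^2 - ...)
```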

**#20 April 9, 2001**

- I reviewed what we knew about power series and Taylor series.
- Power series converge in discs. Inside, such series converge absolutely, and in closed subdiscs they converge uniformly.
- Manipulation via algebra (add, multiply, subtract, divide) and calculus (integration, differentiation) of convergent power series (treating them as "big" polynomials) yields convergent power series.
- Any convergent power series *is* the Taylor series of its sum, which is an analytic function. In turn, the Taylor series of an analytic function centered at a point z_0 must converge in the largest disc centered at z_0 in which the function is analytic.

- And also Laurent series.
- Each function analytic in an annulus (r < |z-z_0| < R) has a unique expansion as a sum of *all* integer powers of z-z_0. The coefficients for such an expansion can be written in terms of certain integrals of the function together with appropriate powers of z-z_0.
- The convergence of the series is qualitatively much the same as with power series. All operations from algebra and calculus, treating the series as "really big" (doubly infinite!) polynomials, are allowed.

- I asked if there were any questions about today's homework. We discussed the solution of some problems. Perhaps the most interesting was the behavior of (cos z)/(z^2-(Pi/2)^2). Here I tried to stress that since cos(Pi/2) is 0, the Taylor series for cosine centered at Pi/2, which is a convergent power series, has terms all of which have a factor of (z-Pi/2). Therefore cos z = (z-Pi/2) multiplied by a convergent power series. That series, in turn, is an analytic function. The quotient of that function divided by (z+Pi/2) is analytic near Pi/2, and its value at Pi/2 is -1/Pi, as predicted by an earlier part of the problem. This technique of factoring out powers and recognizing that convergent power series are analytic is very useful (a neat reflection of l'Hôpital's rule).
- I discussed how to integrate a Laurent series in a punctured disc. The result only shows the -1st coefficient multiplied by 2 Pi i. I called this the Local Residue Theorem.
- I defined the phrase "an analytic function has an isolated singularity at z_0" and defined the residue of such a function at such a point.
- I computed (first wrong, then correctly!) the residue of 1/(1+z^2) at i. This was applied, together with an estimate of the vanishing (as R gets large) of the integral of 1/(1+z^2) over an upper semicircle of radius R and center 0, to get the value of the improper integral of 1/(1+x^2) from -infinity to +infinity.
- I asked what the residue of (sin z)/z at 0 was, and also handed out review problems for the second exam, to be given in a week.
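That residue can be checked numerically: since 1/(1+z^2) = 1/((z-i)(z+i)), the residue at i is 1/(2i) = -i/2, and 2 Pi i times it is Pi, the value of the improper integral. A small sketch (mine) computing the residue as a contour integral around i:

```python
import cmath
import math

def residue_at_i(r=0.1, steps=2000):
    # (1/(2 pi i)) * integral of 1/(1+z^2) around a small circle at i
    total = 0j
    for k in range(steps):
        z = 1j + r * cmath.exp(2j * math.pi * k / steps)
        dz = (z - 1j) * 2j * math.pi / steps
        total += dz / (1 + z**2)
    return total / (2j * math.pi)

res = residue_at_i()
print(res)                  # ≈ 1/(2i) = -0.5j
print(2j * math.pi * res)   # ≈ pi, the value of the improper integral
```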

**#21 April 11, 2001**

- We reviewed the Local Residue Theorem, along with some examples of residues, such as (sin z)/z at 0, exp(1/z) at 0, etc.
- I stated and proved a version of the Residue Theorem. The proof involved cutting in from the surrounding contour and circling each isolated singularity. The cuts canceled out, and integration around the finite number of isolated singularities was handled with the Local Residue Theorem.
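The Residue Theorem just proved can be illustrated numerically. A sketch (my own example, not from the lecture) for 1/(z(z-1)), which has residues -1 at 0 and +1 at 1: a big circle enclosing both poles gives 2 Pi i (-1+1) = 0, while a small circle around 0 alone gives -2 Pi i.

```python
import cmath
import math

def contour_integral(f, center, r, steps=4000):
    # integral of f over the positively oriented circle |z - center| = r
    total = 0j
    for k in range(steps):
        z = center + r * cmath.exp(2j * math.pi * k / steps)
        dz = (z - center) * 2j * math.pi / steps
        total += f(z) * dz
    return total

g = lambda z: 1 / (z * (z - 1))     # residues: -1 at z=0, +1 at z=1
both = contour_integral(g, 0, 2)     # encloses both poles: 2 pi i (-1+1) = 0
one = contour_integral(g, 0, 0.5)    # encloses only z=0: 2 pi i (-1)
print(both, one)
```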
- I moved on to an analysis of isolated singularities: what behavior was possible. Examples were also discussed. The principal part of the Laurent series was defined to be the sum of the terms with negative powers in the Laurent series.
  - With *no* negative terms in the Laurent series, the isolated singularity is removable: a limit exists at the singularity, and the function extended with that limiting value at the singular point is actually analytic. I also stated and proved Riemann's removable singularity theorem: a function bounded near an isolated singularity has a removable singularity.
  - With only a finite number of terms with negative powers in the Laurent series, the function must approach infinity near the singular point: that is, the modulus of the function gets arbitrarily large. Such a singularity is called a pole.
  - We began the analysis of those isolated singularities with an infinite number of non-zero terms in the principal part: the essential singularity. Some work with exp(1/z) near 0 revealed bewildering behavior which will be explained next time.

- I asked students to compute the integral of (sin(1/z))/z^3 over the unit circle.
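For the exercise: sin(1/z)/z^3 = z^-4 - z^-6/6 + z^-8/120 - ..., so the z^-1 coefficient (the residue at 0) is 0 and the integral should vanish. A numerical contour check (my sketch):

```python
import cmath
import math

# Laurent series: sin(1/z)/z^3 = z^-4 - z^-6/6 + z^-8/120 - ...
# The z^-1 coefficient is 0, so the integral over the unit circle vanishes.
steps = 4000
total = 0j
for k in range(steps):
    z = cmath.exp(2j * math.pi * k / steps)
    dz = z * 2j * math.pi / steps
    total += cmath.sin(1 / z) / z**3 * dz
print(abs(total))  # ≈ 0
```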

**#22 April 16, 2001**

- The second exam.

**#23 April 18, 2001**

- I continued the classical description of the types of behavior near an isolated singularity. The assumption is that f is analytic in 0 < |z-z_0| < R and f's Laurent series is the sum of c_n (z-z_0)^n, as n varies through all integers. The principal part (PP) of the singularity is the sum of the terms corresponding to negative integer n's.

| Name | Algebraic condition | Behavior near z_0 | Name of result | Example |
| --- | --- | --- | --- | --- |
| Removable singularity | PP is all zero. | f can be extended at z_0 so that the extension is analytic near z_0. | Riemann's removable singularity theorem: if f is bounded near z_0, the singularity is removable. | (sin z)/z |
| Pole | PP has a finite number of non-zero terms. | If {z_j} is any sequence of complex numbers with limit z_0, then the modulus of f(z_j) approaches infinity. | Just factor out a power of (z-z_0): f has a pole at z_0. | 1/(z^58) |
| Essential singularity | PP has an infinite number of non-zero terms. | Given any complex number M (and M can also be the "extended complex number", infinity), there is a sequence {z_j} with limit z_0 so that the sequence f(z_j) has limit M. | This is called the Casorati-Weierstrass Theorem. | exp(1/z) |

- We returned to applications of the Residue Theorem, which was restated. The principal application to be discussed here and in the next few lectures is evaluation of real definite integrals.
- We looked at the integral from 0 to 2Pi of 1/(2-sin theta). By writing sin theta = (1/(2i))(exp(i theta)-exp(-i theta)) and z = exp(i theta), the integral becomes a contour integral over the unit circle, which can be evaluated with residues. The value is (2 Pi)/sqrt(3).
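A direct numerical check of the value 2 Pi/sqrt(3) ≈ 3.6276 (my sketch, using the midpoint rule, which is extremely accurate for smooth periodic integrands):

```python
import math

steps = 20000
total = 0.0
for k in range(steps):
    theta = (k + 0.5) * 2 * math.pi / steps
    total += 1 / (2 - math.sin(theta))
total *= 2 * math.pi / steps
print(total, 2 * math.pi / math.sqrt(3))   # the two agree
```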
- I discussed how to compute the integral of 1/(1+cos(theta)^2) from 0 to 2Pi, and how practical this computation is.
- I asked if the residue of a function at a point is linear or is multiplicative in the function.

**#24 April 23, 2001**

- Computing improper integrals using the Residue Theorem.
- The integral of 1/(1+x^2) from -infinity to +infinity. ("Closing up" the interval [-R,R] with a semicircle, estimation of the integral on the semicircle, computing the residue at the isolated singularity.)
- The integral of cos(mx)/(1+x^2) from -infinity to +infinity. (The novelty here: finding a suitable analytic function whose restriction to the real axis is related to the integrand simply and whose behavior in the upper half-plane can be estimated easily.) A Fourier transform.
- The integral of 1/(1+x^2)^2 from -infinity to +infinity. (The novelty here: the isolated singularity is a double pole, and we needed to think a bit more about how to compute its residue.)
- The integral of cos(mx)/(1+x^2)^2 from -infinity to +infinity. (Hardly any novelty here.)
- The integral of sqrt(x)/(1+x^2)^2 from 0 to +infinity. (Here the novelty is that the simple analytic function whose restriction to the real line is the integrand has a branch point at 0. We introduce an indented contour in four parts, with large R > 0 and small epsilon > 0: an interval [epsilon,R], a semicircle of radius R centered at 0 in the upper half-plane, an interval [-R,-epsilon], and finally a semicircle of radius epsilon (oriented clockwise) in the upper half-plane centered at 0. Then some thought needs to be given. Estimation of the integrals on the semicircles shows that they go to 0 as epsilon goes to 0 and R goes to infinity. This is not completely straightforward, since different estimates of the bottom need to be made. Careful analysis of the integrals over the two intervals relates them. Here some work needs to be done to decide which value of square root is to be used. Then finally a residue computation is made, again using a correct determination of square root.)
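The second integral above has the well-known closed form Pi e^(-m) for m >= 0, which the residue computation produces. A numerical spot-check (my sketch; the truncation at R and the step size are arbitrary choices):

```python
import math

def fourier_value(m, R=200.0, steps=200000):
    # midpoint rule for cos(mx)/(1+x^2) on [-R, R]; the tails are O(1/R)
    h = 2 * R / steps
    total = 0.0
    for k in range(steps):
        x = -R + (k + 0.5) * h
        total += math.cos(m * x) / (1 + x**2)
    return total * h

for m in [0.0, 1.0, 2.0]:
    print(m, fourier_value(m), math.pi * math.exp(-m))
```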

**#25 April 25, 2001**

- More definite integrals computed via residues
- We began with a "workshop problem" where students actively engaged in small groups with computing a specific definite integral. A handout was used. Links to what was given out in various formats are available.
- The integral of log(x)/(1+x^2) from 0 to infinity. Issues discussed included whether this integral converges (because of the slow growth of log both at 0 and at infinity, it does). After the numerical value (0) was gotten, an alternative explanation using a simple change of variables was given.
- The integral of log(x)/(1+x^2)^2 from 0 to infinity.
- A list of a few interesting definite integrals which can be computed using residues was given: the integral from -infinity to +infinity of exp(-x^2) ("Erf") and of (sin x)/x; the integrals of cos(x^2) and sin(x^2) from 0 to +infinity ("the Fresnel integrals").
- Student evaluations were handed out.
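The value 0 for the first log integral above reflects the change of variables x -> 1/x, which flips the sign of log x while preserving dx/(1+x^2), so the pieces over (0,1) and (1,infinity) cancel. A numerical sketch (mine; the cutoffs 1e-4 and 1e4 are chosen so the neglected tails also cancel by the same symmetry):

```python
import math

def piece(a, b, steps=200000):
    # midpoint rule for the integral of log(x)/(1+x^2) over [a, b]
    h = (b - a) / steps
    total = 0.0
    for k in range(steps):
        x = a + (k + 0.5) * h
        total += math.log(x) / (1 + x**2)
    return total * h

lower = piece(1e-4, 1.0)   # negative, since log x < 0 on (0,1)
upper = piece(1.0, 1e4)    # positive; x -> 1/x pairs it with the lower piece
print(lower, upper, lower + upper)   # the sum is ≈ 0
```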

**#26 April 30, 2001**

- Two topics were covered very rapidly: roots of analytic functions, and conformal mapping.
- Suppose f is analytic and f(z_0)=0. Since f is equal to the sum of its Taylor series centered at z_0, either all the derivatives of f at z_0 are 0 (and then f(z)=0 for all z in a connected open domain) or there is some N so that f^(N)(z_0) is not 0, and we can choose N to be the smallest such integer. Then by rearranging the Taylor series for f near z_0 we can write f(z)=(z-z_0)^N g(z) with g analytic near z_0 and g(z_0) not equal to 0. N is called the multiplicity of the zero of f at z_0. So analytic functions can be factored like polynomials near where they are 0, and the places where non-zero analytic functions are 0 are isolated: nearby z's can't have f(z)=0.
- Look at f'(z)/f(z). This has an isolated singularity at z_0. The residue of this singularity at z_0 is N (this is gotten using the factorization above). Therefore, if C is a simple closed curve, 1/(2 Pi i) times the integral over C of f'/f is equal to the number of zeros of f inside C, counted according to their multiplicities.
- The integral formula leads to useful results in finding roots of analytic functions. Since integrals are quite stable under small perturbation, it follows that roots of analytic functions are similarly stable. This is a complex phenomenon, since roots of real polynomials don't have similar stability. For example, x^2 +/- epsilon^2 has 2 roots in [-1,1] for the - sign choice, and no real roots for the + choice.
- We computed what happens to the tangent vector of a curve under an analytic mapping. It turns out (using the Cauchy-Riemann equations) that (when f'(z) is not 0) the tangent vector gets multiplied by f'(z). So the vector gets stretched by a factor of |f'(z)| and rotated by arg f'(z). Therefore when two curves intersect in the domain, the image curves under f have both of their tangent vectors rotated by the same angle (arg f'(z)), and therefore the angle between the curves doesn't change. (Of course, this is true if f'(z) is not 0, but points where f'(z)=0 are isolated.) A mapping which leaves angles unchanged is called a conformal mapping. This property of analytic functions is equivalent to others previously cited: it can be used as a definition, also.
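The zero-counting integral of f'/f described above is easy to test numerically. A sketch (my example, not from the lecture): z^3 - 1 has its three zeros on the unit circle, so counting inside |z| = 2 should give 3.

```python
import cmath
import math

def count_zeros(f, fprime, r, steps=4000):
    # (1/(2 pi i)) * integral of f'/f over |z| = r: the number of zeros
    # inside, counted with multiplicity
    total = 0j
    for k in range(steps):
        z = r * cmath.exp(2j * math.pi * k / steps)
        dz = z * 2j * math.pi / steps
        total += fprime(z) / f(z) * dz
    return total / (2j * math.pi)

p = lambda z: z**3 - 1     # zeros: the three cube roots of unity
dp = lambda z: 3 * z**2
print(count_zeros(p, dp, 2.0))   # ≈ 3
```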
- The mapping S(z)=z^2 maps the open first quadrant to the upper half-plane conformally.
- The mapping C(z)=-i((z-1)/(z+1)) maps 1 to 0, i to 1, and -i to -1. We verified with direct algebraic computation that if C(z) is real, then |z|^2=1. So it maps much of the unit circle to the real line. Since C(0)=i, C seems to map the interior of the unit circle to the upper half-plane. From the point of view of stereographic projection, this mapping just rotates the sphere in 3-space, taking one spherical cap to another.
- The composition of C followed by S followed by C^(-1) actually maps the half-disc inside the unit circle and above the x-axis to the first quadrant, then to the upper half-plane, then back to the unit disc. So half the unit disc is actually conformally equivalent to the whole unit disc. This is curious, and is actually a reflection of the Riemann Mapping Theorem, beyond the scope of this course, which states that any domain which is connected and simply connected and not all of the complex plane is conformally equivalent to the unit disc.
- The material covered today will *not* be tested on the final exam.
- The instructor will be available via e-mail, and will have office hours on Friday, May 4, from 10 AM to noon, in Hill 542. Graded homework will be available for pickup then. More hours: Monday, May 7, from 3 PM to 5 PM.
**The final exam is Monday, May 7, from 8 PM to 11 PM, in the usual classroom.**