There are powerful techniques for proving the existence of meromorphic and holomorphic functions in C with rigidly specified behavior (poles and zeros). The principal results are called the Mittag-Leffler Theorem and the Weierstrass Factorization Theorem. Fairly direct proofs of both of these are below. A few consequences are mentioned.
Mittag-Leffler Theorem Suppose W is a closed, discrete subset of C, and suppose that for each w in W, a polynomial P_{w} in C[z] with no constant term is selected. Then there is a meromorphic function f defined in C with pole set W such that the principal part of f at each w in W (this is the sum of the terms of negative degree in the Laurent series for f at w) is P_{w}(1/(z-w)).
Proof We first write W as a disjoint union of sets W_{n}. Here n is a non-negative integer. So W_{0} will be those w's in W with |w|<=1, while, more generally, if n is at least 1, W_{n} is the collection of w's in W with n<|w|<=n+1. Note that since W is discrete and closed, each of the W_{n} has at most finitely many elements. Of course, some of them may be empty. Then define Q_{n} (for n>=0) to be Q_{n}(z)=Σ_{w in W_{n}}P_{w}(1/(z-w)) (this is a finite sum!). If W_{n} is empty, then Q_{n}(z) should be 0. These are the sums of the principal parts, the pieces of the singularities, in each of the annular regions between an integer and its successor (integer+1).
Now consider W_{n} for n>=1. All w's in this W_{n} must have |w|>n, so the sum defining Q_{n} is holomorphic in some disc of radius r, where r>n. This is because W_{n} is finite, and each w of W_{n} has modulus greater than n, and a minimum of a finite set is one of the set's elements (a specific |w|=r>n). Since Q_{n} is holomorphic in D_{r}(0), it is equal to a power series centered at 0 valid in all of the disc. The series will converge uniformly on compact subsets. Therefore there is a partial sum V_{n} of this power series (just a polynomial) so that if |z|<=n then |Q_{n}(z)-V_{n}(z)|<1/2^{n}.
Now we can write a "recipe" for f to prove the theorem! It will be defined by f(z)=Q_{0}(z)+Σ_{n=1}^{∞}(Q_{n}(z)-V_{n}(z)).
Some verification is necessary.
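Here is a numerical sketch of the simplest instance of this construction (my own illustration, not part of the proof): poles at the positive integers, each with principal part 1/(z-n). The naive series Σ 1/(z-n) diverges, but subtracting the degree-0 partial sum of the Taylor series of 1/(z-n) at 0 (namely -1/n, playing the role of V_{n}) makes the n-th term O(1/n²):

```python
# Mittag-Leffler-style series with prescribed poles at n = 1, 2, 3, ...
# and principal part 1/(z - n) at each pole.  The "tweaked" n-th term is
# 1/(z - n) + 1/n = z/(n(z - n)), which is O(1/n^2) on bounded sets.

def f(z, N=200_000):
    """Partial sum of sum_{n=1}^{N} [1/(z - n) + 1/n]."""
    return sum(1.0 / (z - n) + 1.0 / n for n in range(1, N + 1))

# The tweaked partial sums stabilize at a non-pole point:
print(abs(f(0.5, 100_000) - f(0.5, 200_000)))  # small

# Near the pole at 1 the prescribed principal part dominates: (z-1)*f(z) -> 1.
z = 1 + 1e-6
print((z - 1) * f(z))  # approximately 1
```

The choice of correction term here is a special case: the proof's V_{n} may be a higher-degree Taylor polynomial when the principal parts are larger.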
First comment My comments in class on Friday/Wednesday were maybe a bit deceptive (no: wrong!) when I asserted that, oh my, the "bookkeeping" involved in such a proof would be forbidding. I was thinking more algorithmically. I wanted a more definite "formula" or "procedure" for the function f, that is, for selecting the polynomials which are involved with its description. Well, certainly if W is a sequence of w's whose growth (that is, rate of increase of |w|) is known, then we can use geometric series arguments effectively to get the polynomials we need to balance the principal parts. But the proof above avoids that consideration. There is no "effective formula" given, but the phrases "will converge uniformly" (applied to the power series for a holomorphic function) and "we can choose a partial sum ... so that" applied to the same series are the existential (?) version of effectively estimating the geometric series remainders.
Second comment Be very aware that the theorem does not assert that Σ_{w in W}P_{w}(1/(z-w)) converges or that Σ_{n=1}^{∞}Q_{n}(z) converges! That might be true (someone nice might have thrown in some 1/n! factors, after all). The convergence of the series defining f is a bit delicate. We have "tweaked" the terms with somewhat subtle adjustments so that the series for f does converge in the way we would like.
Third comment A version of Mittag-Leffler is true for any open subset of C, as I declared in class. But a proof needs some topological "dissection" of the set as a nice increasing union of compact sets. Some further knowledge of analysis is very useful also, since we need the analog of partial sums of Taylor series. A result called Runge's Theorem (also classical) provides such approximations. But I won't prove Runge's Theorem in this course.
Entire functions and their zeros
Suppose f is an entire function. Then f may or may not have zeros. For
example, exp has no zeros (any entire function with no zeros
can be written as e^{g(z)} where g is entire since C is simply
connected). Let's suppose that f does have zeros or roots. If f has a
finite number of roots, then f can be written as a product of a
polynomial and e^{g(z)}. So we know all about that.
So from here on let's suppose that f has infinitely many roots. Of course, one such function is the zero function, and f will be that function if the set of roots has an accumulation point. Let's further assume that f is not constant. Then the set of roots is countable with no accumulation point. But notice we need to worry also about multiplicity. We've seen that in complex analysis and algebra z^{3} naturally seems like it has 3 roots, and they all happen to be 0. Well, let us count the multiplicity of the roots, also. Then each complex number may occur as a root many times, but only finitely many times. So if {z_{n}} is the sequence of roots, then we know these Necessary Root Facts: each complex number occurs only finitely many times in the sequence, and lim_{n→∞}|z_{n}|=∞ (the roots form a closed, discrete subset of C).
We will create a way to describe such an f precisely enough to ensure that it exists. If we had only a finite number of roots, then we could easily write a product which had the desired roots. Well, take the sequence {z_{n}} and just write Π_{n=1}^{∞}(z-z_{n}), and this is very nice except what does it mean?
Infinite products
We might want to declare that an infinite product converges if the sequence of partial products (analogous to partial sums) converges. But consider the following examples: every partial product of Π_{n=1}^{∞}(1/2) is 1/2^{k}, so the partial products converge to 0 even though the corresponding series of logs diverges to -∞; and if even a single factor is 0, the partial products are eventually 0 no matter what the other factors do. So a limit of 0 should not be allowed to count as convergence.
Definition of convergent infinite product The infinite product Π_{n=1}^{∞}w_{n} (here the w_{n}'s are all supposed to be complex numbers) is said to converge if there is an integer N so that for all n>=N, w_{n} is not 0, and if also lim_{k→∞}Π_{n=N}^{k}w_{n} exists and is not 0.

With this definition, the individual terms of a convergent infinite product must tend to 1. It makes sense then to rewrite statements about infinite products from a form like Π_{n=1}^{∞}w_{n} to something like Π_{n=1}^{∞}(1+a_{n}) where we will investigate how fast a_{n}→0.
The following theorem is true (and is one of the reasons for using the definition above!):
Proposition Suppose that {a_{n}} is a sequence of complex numbers, and that Σ_{n=1}^{∞}|a_{n}| is finite. Then the infinite products Π_{n=1}^{∞}(1+|a_{n}|) and Π_{n=1}^{∞}(1+a_{n}) both converge.
Proof If Σ_{n=1}^{∞}|a_{n}| converges, then eventually the terms will be less than 1/2, so let me assume that they are all less than 1/2. But look at log near 1. We know:
If |z|<1/2, then log(1+z)=z+Err(z), where |Err(z)|<=|z|^{2} (look at the Taylor series and overestimate the tail with a geometric series).
Then log(Π_{n=1}^{∞}(1+a_{n}))=Σ_{n=1}^{∞}log(1+a_{n})=Σ_{n=1}^{∞}a_{n}+ERR, where ERR is the sum of the possible errors. Remember that we are looking at a series where the terms all have modulus less than 1/2 (possibly a tail of the original series), so that squaring things makes them even smaller: the series of errors making up ERR is dominated by Σ_{n=1}^{∞}|a_{n}|^{2}, which converges. So since the log of the product converges, the product must also.
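As a numerical sanity check on this proposition (my addition, not in the notes): take a_{n}=1/n², so Σ|a_{n}| is finite. The resulting product has the classical value sinh(π)/π, which follows from Euler's sine product evaluated at z=i; that closed form is assumed here just for checking.

```python
import math

# Check that prod (1 + 1/n^2) converges when sum |a_n| = sum 1/n^2 < infinity.
# Assumed classical value: prod_{n>=1} (1 + 1/n^2) = sinh(pi)/pi.
N = 100_000
prod = 1.0
for n in range(1, N + 1):
    prod *= 1.0 + 1.0 / n**2   # a_n = 1/n^2

print(prod, math.sinh(math.pi) / math.pi)  # the two values agree closely
```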
Comment Of course, with the principal hypothesis of this proposition, the product's convergence is very much like the convergence of an absolutely convergent series. So, in fact, |Π_{n=1}^{∞}(1+a_{n})|<=Π_{n=1}^{∞}(1+|a_{n}|) is true. Also the analogy of the fourth problem on the midterm is true: for infinite products which satisfy this hypothesis, we can rearrange the factors and the result will always converge, and the value of the product will be the same. This stability is very useful. All of the infinite products considered here will "converge absolutely" in the sense of this result.
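The rearrangement claim is easy to spot-check numerically. This sketch (my addition, with the arbitrary choice a_{n}=(-1)^{n}/n² for n>=2, which is absolutely summable) multiplies the same factors in two different orders:

```python
import random

# Under absolute convergence (sum |a_n| finite), permuting the factors of
# prod (1 + a_n) does not change the value.  Here a_n = (-1)^n / n^2, n >= 2.
terms = [1.0 + ((-1) ** n) / n**2 for n in range(2, 50_001)]

in_order = 1.0
for t in terms:
    in_order *= t

random.seed(0)
shuffled_terms = terms[:]
random.shuffle(shuffled_terms)

shuffled = 1.0
for t in shuffled_terms:
    shuffled *= t

print(abs(in_order - shuffled))  # essentially floating-point rounding error
```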
Proposition Suppose that {g_{n}} is a sequence of continuous complex-valued functions, and that Σ_{n=1}^{∞}|g_{n}(z)| converges uniformly in some subset S of C. Then Π_{n=1}^{∞}(1+|g_{n}(z)|) and Π_{n=1}^{∞}(1+g_{n}(z)) both converge and the values are continuous functions on S.
We can say a bit more for holomorphic functions.
Theorem Suppose U is an open subset of C, and {g_{n}} is a sequence of holomorphic functions in U, none of which are constant in any component of U. Also suppose that Σ_{n=1}^{∞}|g_{n}(z)| converges uniformly on compact subsets of U. Then F(z)=Π_{n=1}^{∞}(1+g_{n}(z)) converges for all z in U and is holomorphic in U. F is not identically 0, and, if F(w)=0, then the order of the zero of F at w is equal to the sum of the orders of the zeros of 1+g_{n} at w.
I won't prove this, but let's talk about it. Suppose z is in U. Since the product for F(z) converges, there is N so that Π_{n=N}^{∞}(1+g_{n}(z)) converges and is non-zero. Thus if F(z)=0, only finitely many of the (1+g_{n})'s can be 0 at z, and (since the functions are not constant) the orders of the zeros must be finite. So the sum of the orders of the zeros of the (1+g_{n})'s must be finite. The remainder of the theorem follows from the previous result and the fact that locally uniformly convergent sequences of holomorphic functions have holomorphic limits.
Now back to creating f
So we have a sequence {z_{n}} satisfying the Necessary Root Facts. We initially tried Π_{n=1}^{∞}(z-z_{n}) but now we rewrite the infinite product as Π_{n=1}^{∞}(1-[z/z_{n}]). If some of the z_{n}'s are 0, I will just multiply the result by z^{correct power}, so I will assume all of the z_{n}'s to be considered aren't 0. I don't think we will generally be lucky enough to have Σ_{n=1}^{∞}|z/z_{n}| converge (z_{n} could just be n, for example). So we will need convergence-producing factors. We might as well take them to be e^{THING} since we don't want to manufacture more zeros of the resulting function.
Look now at Π_{n=1}^{∞}(1-[z/z_{n}])e^{h_{n}(z)}. Let's consider the log of this product (given z in C, everything I write is true for sufficiently large n uniformly in a neighborhood of z since lim_{n→∞}|z_{n}|=∞). So we have a series whose n^{th} term is log(1-[z/z_{n}])+h_{n}(z). If I can get h_{n}'s so that this is O(1/2^{n}) as n→∞ for all z's in some disc around 0, then I would be done. Can we do this?
Fix some positive integer n, and consider the z's which satisfy |z|<|z_{n}|/2. Then we know that log(1-[z/z_{n}]) can be written using a convergent power series. Remember that log(1+w)=Σ_{j=1}^{∞}{(-1)^{j+1}/j}w^{j} for |w|<1. Now if we know that |w|<1/2, we can easily (yup, introductory calculus, just think about the infinite and finite geometric series with error term and then integrate) write:
log(1+w)=Σ_{j=1}^{m}{(-1)^{j+1}/j}w^{j}+Error_{m}(w)
where |Error_{m}(w)|<2|w|^{m+1} if |w|<1/2.
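The estimate |Error_{m}(w)|<2|w|^{m+1} can be spot-checked numerically; here is a small sketch (my addition; the sample points are arbitrary choices with |w|<1/2):

```python
import cmath

def error_m(w, m):
    """Error_m(w) = log(1+w) - sum_{j=1}^{m} ((-1)^{j+1}/j) w^j."""
    partial = sum(((-1) ** (j + 1) / j) * w**j for j in range(1, m + 1))
    return cmath.log(1 + w) - partial

# Spot checks: the tail of the log series obeys |Error_m(w)| < 2|w|^{m+1}.
for w in [0.4, -0.3, 0.2 + 0.3j, -0.1 - 0.4j]:
    for m in [1, 2, 5, 10]:
        assert abs(error_m(w, m)) < 2 * abs(w) ** (m + 1)
print("all bounds hold")
```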
Now "plug in" -z/z_{n} for w in the equation. We get:
log(1-z/z_{n})=Σ_{j=1}^{m}{(-1)^{j+1}/j}(-z/z_{n})^{j}+Error_{m}(-z/z_{n})
where |Error_{m}(-z/z_{n})|<2|z/z_{n}|^{m+1} if |z/z_{n}|<1/2.
There's a whole bunch of things that come together here. Some simplicities occur. For example, all of the minus signs except one cancel: (-1)^{j+1}(-1)^{j}=-1 for every j. And we are considering only the z's where |z/z_{n}|<1/2, so we can "forget" the hypothesis. And the error estimate, again because |z/z_{n}|<1/2, becomes easier. So we have:
log(1-z/z_{n})=-Σ_{j=1}^{m}(z/z_{n})^{j}/j+Error_{m}(z/z_{n})
where |Error_{m}(z/z_{n})|<1/2^{m} for our z's.
Hey: I now know how to select the correct h_{n}! Take m so that |z/z_{n}|^{m}<1/2^{n} (n and a non-zero z_{n} are given, and we are free to take such an m, which I will call m_{n}; then |Error_{m_{n}}(z/z_{n})|<2|z/z_{n}|^{m_{n}+1}<|z/z_{n}|^{m_{n}}<1/2^{n}, as wanted). And take h_{n}(z)=Σ_{j=1}^{m_{n}}(z/z_{n})^{j}/j (we discarded the minus sign).
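For the concrete root sequence z_{n}=n this can be tested numerically (my addition). Here Σ|z/z_{n}|=|z|Σ1/n diverges, but degree-1 factors h_{n}(z)=z/n already produce convergence, since log(1-z/n)+z/n=O(1/n²); the general recipe above allows the degree m_{n} to grow when needed. The closed form e^{γz}/Γ(1-z) is the classical Weierstrass product for the Gamma function, assumed here just for checking:

```python
import math

# Roots z_n = n with convergence-producing factors e^{z/n}.
# Assumed classical identity: prod_{n>=1} (1 - z/n) e^{z/n} = e^{gamma*z}/Gamma(1-z).
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def weierstrass_product(z, N=100_000):
    prod = 1.0
    for n in range(1, N + 1):
        prod *= (1.0 - z / n) * math.exp(z / n)
    return prod

z = 0.5
print(weierstrass_product(z), math.exp(GAMMA * z) / math.gamma(1 - z))  # agree

# The product vanishes exactly at the prescribed roots:
print(weierstrass_product(3.0))  # 0.0 (the n = 3 factor is exactly zero)
```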
Theorem Given any sequence {z_{n}} satisfying the Necessary Root Facts, there is an entire function f whose zero set, counting multiplicities, is that sequence. Additionally, f can be written as a specific infinite product (as described above, with, as noted, a possible z^{power} if needed).
Wow! This is some theorem. The crazy h_{n}'s described here are involved in the definition of the Weierstrass elementary factors and the result above, suitably formulated, is called the Weierstrass Factorization Theorem. There's also a similar result for any open subset of C. But let me show you the fantastic results which follow from this theorem.
First, the result I mentioned a while ago, proved for C:
Theorem (Quotient field!) Any meromorphic function in C is a quotient of holomorphic functions (entire functions) in C.
Proof Suppose M is a meromorphic function, and suppose that P is its pole set, counted with multiplicities. So if w is a pole of order N, then w appears N times in the sequence {z_{n}}. Now create a holomorphic function g whose zero set is P. If w is an element of the pole set of M, then we know about the local descriptions of M and g near w: we know that g(z)=(z-w)^{N}h_{1}(z) and M(z)=(z-w)^{-N}h_{2}(z) near w, where h_{1} and h_{2} are holomorphic near w and their values at w are non-zero (they are local units). Thus locally, (g·M)(z)=h_{1}(z)h_{2}(z) is a unit near w: a non-zero holomorphic function (so, more precisely, the product g·M has a removable singularity at w, which we will think of as "removed"). Notice that away from the pole set of M, the product g·M is also holomorphic. Therefore this product defines a function f which is holomorphic in C. And M=f/g, as desired. (!)
There's an amazing result on interpolation which uses both the Weierstrass Factorization Theorem (above) and the Mittag-Leffler Theorem.
Theorem (Interpolation!) Suppose {z_{n}} is any closed discrete sequence in C, and {w_{n}} is any sequence of complex numbers. Then there is an entire function f so that for all n, f(z_{n})=w_{n}.
Proof Use the Weierstrass Factorization Theorem to create a function whose zero set is {z_{n}}. Just as before, the local picture of this function near a point z_{n} in its zero set is (z-z_{n})h(z), and h(z_{n}) is non-zero. Then use the Mittag-Leffler Theorem to create a meromorphic function with simple poles at each z_{n}, and with principal part at z_{n} equal to [w_{n}/h(z_{n})][1/(z-z_{n})]. The product of these two functions has the desired properties.
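Here is a finite-data numerical sketch of the mechanism in this proof (my illustration; the nodes and target values are hypothetical). Take g(z)=sin(πz), whose zeros are the integers; near an integer n, g(z)=(z-n)h(z) with h(n)=g'(n)=π(-1)^{n}, and the Mittag-Leffler function is just a finite sum of principal parts:

```python
import math

# g(z) = sin(pi z) vanishes simply at the integers; M has principal part
# [w_n / h(n)] * 1/(z - n) at each node, with h(n) = pi * (-1)^n.
nodes = [0, 1, 2, 3, 4]               # hypothetical interpolation nodes
values = [2.0, -1.0, 0.5, 3.0, 7.0]   # hypothetical target values w_n

def f(z):
    """g(z) * M(z); extends holomorphically across the nodes with f(n) = w_n."""
    M = sum(w / (math.pi * (-1) ** n * (z - n)) for n, w in zip(nodes, values))
    return math.sin(math.pi * z) * M

# The formula is 0/0 at a node itself, so check just next to each node:
for n, w in zip(nodes, values):
    print(n, f(n + 1e-7))  # approximately w_n
```

Evaluating near (rather than at) each node sidesteps the removable singularity; the limit at the node is exactly w_{n}, by the same local computation as in the proof.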
Actually you can even do much much better: you can arbitrarily specify a finite initial "chunk" of the Taylor series of a holomorphic function at any closed discrete sequence of points in C (just use Mittag-Leffler with higher order singularities and Weierstrass with higher order zeros). Compare this result with the problem in the 0^{th} homework assignment, which asserted that a power series can grow arbitrarily fast on the integers. So we can now prove a much stronger result.
The classical literature is full of very precise descriptions of factorizations for specific functions, such as sin(z)=zΠ_{n=1}^{∞}[1-(z^{2}/(n^{2}π^{2}))], which Euler believed. This is a convergent infinite product since Σ_{n=1}^{∞}z^{2}/(n^{2}π^{2}) converges absolutely and locally uniformly. Since the infinite product has the same zeros as sine, the quotient is a non-vanishing entire function. A bit more work must be done to verify that the quotient is actually 1.
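Euler's product is easy to test numerically; this sketch (my addition) compares a partial product against the sine function itself:

```python
import math

# Partial products of Euler's factorization sin(z) = z * prod (1 - z^2/(n^2 pi^2)).
def sine_product(z, N=100_000):
    prod = z
    for n in range(1, N + 1):
        prod *= 1.0 - z * z / (n * n * math.pi * math.pi)
    return prod

for z in [0.5, 1.0, 2.0]:
    print(z, sine_product(z), math.sin(z))  # the two columns agree closely
```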
Maintained by greenfie@math.rutgers.edu and last modified 12/1/2007.