
4 Taylor Polynomials.

4.1 Motivation

Our aim is to develop a method for approximating a (differentiable) function f(x) using polynomials.
We know from high school that we can draw a tangent line to a curve at a point, and that near that point the tangent line is a good approximation to the function.

Can we extend this idea of a tangent line to a polynomial of higher degree?
4.2 Taylor Polynomials

The linearization of a differentiable function f at a point a is the polynomial of degree one
P_1(x) := f(a) + f'(a)(x - a).
This is simply the equation of the tangent line to f(x) at x = a. Near x = a, this gives a reasonable approximation to the function; indeed, at x = a it is exactly equal to the function, that is, P_1(a) = f(a).
In order to generalise this, we write
P_n(x) = b_0 + b_1(x - a) + b_2(x - a)^2 + .... + b_n(x - a)^n

and ask that P_n(a) = f(a), P_n'(a) = f'(a), P_n''(a) = f''(a), ... and so on.
This gives:

Hence, we have
Definition
Suppose f is a function which can be differentiated n times at
the point x = a.

The Taylor Polynomial of order n for f at x = a is given by

P_n(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + (f'''(a)/3!)(x - a)^3 + ... + (f^(n)(a)/n!)(x - a)^n.

(Note: We use the term order rather than degree here, since f^(n)(a) might be equal to 0 and so the polynomial may have order n but not degree n.)

Example: Find the Taylor polynomials for f(x) = e^x up to order 3 about x = 0.
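For reference: every derivative of e^x is e^x, so each derivative equals 1 at x = 0 and
P_0(x) = 1, P_1(x) = 1 + x, P_2(x) = 1 + x + x^2/2, P_3(x) = 1 + x + x^2/2 + x^3/6.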

Notice in the graphs that near x = 0 the polynomials give a good approximation to e^x, but once we move away from x = 0 they don't.
Example: Compute the Taylor polynomial of order 2n for the
function f (x) = cos x about x = 0.

Example: Compute the Taylor polynomial of order 2n + 1 for the function f(x) = sin x about x = π/2.

4.3 The Error Term.

We have seen graphically that near x = a the Taylor polynomial is close to the function. We will now begin to investigate more precisely what this means. When we approximate, there is an error involved, which we can try to measure. Hence, we try to write
f(x) = P_n(x) + R_{n+1}(x),
where R_{n+1}(x) is the error term when we approximate f(x) by P_n(x).
Notice that the error will depend both on the order of the Taylor polynomial (the more terms, the better the approximation) and on the value of x at which we are approximating (the closer x is to a, the better the approximation).

The following remarkable theorem gives a relatively simple formula for the remainder. The details of how this theorem is proven are contained in the detailed lecture notes.

Theorem 4.1. (Taylor's Theorem.)
Suppose that f has n + 1 continuous derivatives on some open interval I which contains a. Then for each x ∈ I,
f(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + ... + (f^(n)(a)/n!)(x - a)^n + R_{n+1}(x),
where R_{n+1}(x) = (f^(n+1)(c)/(n + 1)!)(x - a)^{n+1} and c is some real number between a and x.
In essence this says that we can approximate a (smooth) function by its Taylor polynomial, with an error term R_{n+1}(x) given by (f^(n+1)(c)/(n + 1)!)(x - a)^{n+1}.
There are other formulae for the remainder term. The one quoted above is sometimes referred to as Lagrange's form of the remainder.
The number c may appear a bit mysterious. The proof of this result uses the Mean Value Theorem (indeed this theorem can be seen as a generalisation of the MVT, as you can see by putting n = 0). In a given problem it is very hard to determine exactly what c is; we don't in general know exactly what c is, but we do know where it is, and this enables us to get an idea of how big the error can be in a given approximation.
Example: Estimate the error in approximating f(x) = e^x by P_2(x) = 1 + x + x^2/2 over the interval x ∈ (-0.5, 0.5).
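For reference, a sketch: by Taylor's Theorem, R_3(x) = (f'''(c)/3!)x^3 = e^c x^3/6 for some c between 0 and x. For |x| < 0.5 we have e^c < e^{0.5} < 2, so
|R_3(x)| ≤ 2(0.5)^3/6 ≈ 0.042.
(Using the sharper bound e^{0.5} ≈ 1.65 gives roughly 0.034.)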

Example: Calculate e with an error of less than 10^{-5}.
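For reference, a sketch: taking a = 0 and x = 1, the error in using P_n(1) = ∑_{k=0}^n 1/k! is R_{n+1}(1) = e^c/(n + 1)! for some c in (0, 1), so |R_{n+1}(1)| ≤ e/(n + 1)! < 3/(n + 1)!. This is below 10^{-5} once (n + 1)! > 3 × 10^5, and 9! = 362880 works, so n = 8 suffices and e ≈ ∑_{k=0}^8 1/k! ≈ 2.71828.

A quick check in MAPLE (standard Maple commands, shown here as a sketch rather than as part of the original notes):
> evalf(add(1/k!, k = 0..8));   # partial sum P_8(1)
                          2.718278770
> evalf(exp(1));
                          2.718281828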

4.4 Classifying Stationary Points.

At school you learnt the 2nd derivative test for classifying stationary points of functions. We can now prove a second version of the 2nd derivative test by employing Taylor's theorem!
Theorem 4.2. Let f have continuous 1st and 2nd order derivatives on an interval I and suppose that f'(a) = 0 for some a ∈ I.
(i) If f'' ≤ 0 on I then f has a local max at a.
(ii) If f'' ≥ 0 on I then f has a local min at a.
Proof. We prove this by approximating f by its first order Taylor polynomial with remainder term:
f(x) = f(a) + f'(a)(x - a) + (f''(c)/2)(x - a)^2, for all x ∈ I,
with c between a and x. Now since f'(a) = 0, we have
f(x) = f(a) + (f''(c)/2)(x - a)^2.
This will give a maximum or a minimum at a according as f''(c) ≤ 0 or f''(c) ≥ 0.

Example: y = x^4 at x = 0

The following more general theorem is also useful for max/min classification.
Theorem 4.3. Suppose f is n times differentiable at a with
0 = f'(a) = f''(a) = ... = f^(k-1)(a).
In addition, if f^(k)(a) ≠ 0, with k ≤ n, then:
(i) If k is even and f^(k)(a) > 0 then f has a local min at a.
(ii) If k is even and f^(k)(a) < 0 then f has a local max at a.
(iii) If k is odd then f has an inflection point at a.

Example: f(x) = x^7 - 10x^6 + 40x^5 - 80x^4 + 80x^3 - 32x^2 - 10 has a stationary point at x = 2. Classify it.
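For reference, one way to classify it: the polynomial factorises as f(x) = x^2(x - 2)^5 - 10, and writing x^2 = (x - 2)^2 + 4(x - 2) + 4 gives
f(x) = -10 + 4(x - 2)^5 + 4(x - 2)^6 + (x - 2)^7.
Comparing with the Taylor series of f about x = 2, we see f'(2) = f''(2) = f'''(2) = f^(4)(2) = 0 while f^(5)(2) = 5! × 4 ≠ 0. Since the first non-vanishing derivative has odd order k = 5, Theorem 4.3 says x = 2 is a (horizontal) point of inflection.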


4.5 MAPLE.

To obtain the Taylor polynomial of e^x of order 4 we use the following command:
T1 := taylor(exp(x), x = 0, 5);
                1 + x + 1/2 x^2 + 1/6 x^3 + 1/24 x^4 + O(x^5)
We can remove the O(x^5) and convert into a proper polynomial by using:
convert(T1, polynom);
                1 + x + 1/2 x^2 + 1/6 x^3 + 1/24 x^4


5 Sequences.

We have now seen that an nth order polynomial can approximate a (smooth) function and that the larger the value of n,
the better the approximation, provided we keep to the same x
value.
These ideas raise interesting and difficult questions when we
ask what happens as n becomes large. To deal with these, we
will step back a little and look firstly at sequences of real numbers and then infinite series of real numbers and finally return
to Taylor, Maclaurin and Power Series.


5.1 Sequences of Real Numbers.

Definition: A sequence is simply a function whose domain is (a subset of) the natural numbers with co-domain the real numbers.
Ex: a_n = 5^n/n!

Given a sequence of numbers, one of the questions we wish to answer is: do the terms get closer to some finite number as we go further and further along the sequence? That is, does lim_{n→∞} a_n exist?

For example, the terms of the sequence a_n = n grow without bound, and so we say that this sequence diverges.
On the other hand, the terms in the sequence a_n = 1/n become smaller and smaller, and so we say that this sequence converges to 0.
Similarly, the terms in the sequence a_n = (-1)^{n+1}/n, although oscillating, have magnitude that becomes smaller and smaller, and so we say that this sequence converges to 0 also.
Finally, the terms in the sequence a_n = sin n oscillate, but their magnitude does not approach any limit as n increases. We say that this sequence is boundedly divergent.

5.2 Geometric Interpretation of Limits.

Example: Consider the sequence a_n = n/(n + 1).

If we draw a little band around the line through 1, we see that eventually the crosses (the plotted terms) move into the band and stay there forever. That is, no matter how small we make the band, there is an integer N such that if we take any term further along the sequence than the N-th, then it is in the band, i.e. its distance from 1 is less than the width of the band. The value of N will depend on the width of the band: the smaller the width, the larger N will have to be.


We can formalise this simple idea by saying that
lim_{n→∞} a_n = L
if and only if: given some positive real number ε, we can find an integer N such that |a_n - L| < ε whenever n > N.
In words: provided n is large enough (i.e. n > N), the terms of the sequence are inside the epsilon band around L, i.e. |a_n - L| < ε.

Note that we are not necessarily looking for the smallest value of N, and in practice this might be very hard to find; we only want a value of N that works.


Ex. Prove formally that lim_{n→∞} 1/n = 0.

Ex. Prove formally that lim_{n→∞} (-1)^n/(n + 1)^2 = 0.

Ex. Prove formally that lim_{n→∞} (3n^2 - 1)/(n^2 + 2) = 3.
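For reference, a sketch of the last one: for any ε > 0,
|(3n^2 - 1)/(n^2 + 2) - 3| = |(3n^2 - 1 - 3n^2 - 6)/(n^2 + 2)| = 7/(n^2 + 2) < 7/n^2,
and 7/n^2 < ε whenever n > √(7/ε). So given ε > 0, any integer N with N ≥ √(7/ε) works.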

A similar geometric interpretation exists for sequences that diverge to ∞.


5.3 Rules for Limits.

Although the above definition of limit essentially captures the notion, it is not very practical to use. Instead, we want some basic rules which help us to build up more complicated limits from simpler ones.
The following rules are often referred to as the algebra of limits. Suppose a_n → L and b_n → ℓ as n → ∞, where L and ℓ are finite real numbers. Then
Rule 1. a_n ± b_n → L ± ℓ.
Rule 2. a_n b_n → Lℓ.
Rule 3. a_n/b_n → L/ℓ, provided b_n ≠ 0 for any n and ℓ ≠ 0.
Rule 4. Suppose f : R → R is a function and suppose that lim_{x→∞} f(x) exists. Let a_n = f(n), where n is an integer; then
lim_{n→∞} a_n = lim_{x→∞} f(x).
Rule 5. Suppose f is a continuous function, and that a_n belongs to the domain of f for each n; then if a_n → a, we have f(a_n) → f(a).
Examples:
1. lim_{n→∞} (4n^2 - 3n + 2)/(2n^2 + 6n + 1)

2. lim_{n→∞} sin(π/2 - 1/n)

3. lim_{n→∞} n sin(1/n)

The following limits are standard and will be used in later work.
4. lim_{n→∞} (ln n)/n^α = 0 for α > 0.
5. lim_{n→∞} n^{1/n} = 1.
6. lim_{n→∞} x^{1/n} = 1 for x > 0.
7. lim_{n→∞} x^n = 0 for |x| < 1.

Observe that Rule 4 enables us to use L'Hôpital's rule, for example,
lim_{n→∞} (1 + 1/n)^n.
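For reference, a sketch: by Rule 4 it is enough to evaluate lim_{x→∞} (1 + 1/x)^x. Taking logarithms,
ln((1 + 1/x)^x) = ln(1 + 1/x)/(1/x) → 1 as x → ∞
by L'Hôpital's rule, so (1 + 1/n)^n → e^1 = e.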


5.3.1 The Pinching Theorem for Sequences.

You have already seen the Pinching Theorem for functions. The story is very similar for sequences.
Theorem 5.1. Suppose that {a_n}, {b_n}, {c_n} are sequences such that for all sufficiently large n we have
a_n ≤ b_n ≤ c_n.
Then if lim_{n→∞} a_n = lim_{n→∞} c_n = L, we have that lim_{n→∞} b_n exists and equals L.
Example: Discuss the limit of a_n = n!/n^n.
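For reference, a sketch: for n ≥ 1,
0 < n!/n^n = (1/n)(2/n)(3/n)···(n/n) ≤ 1/n,
since every factor after the first is at most 1. Both 0 and 1/n tend to 0, so by the Pinching Theorem n!/n^n → 0.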


6 Infinite Series.

A series is simply the sum of the terms of a sequence. For an infinite series we define
∑_{k=j}^∞ a_k = lim_{N→∞} ∑_{k=j}^N a_k,
provided this limit exists. If it exists, then we say that the series ∑_{k=j}^∞ a_k converges. Otherwise, we say that the series diverges.
We can think of this sum, then, as the limit of a sequence of partial sums, s_n = ∑_{k=j}^n a_k.

Does the process make sense?



Example: Discuss the series ∑_{n=1}^∞ 1/2^n and ∑_{n=1}^∞ 1/n.

6.1 The nth term test for divergence.

Note the following important result:

Theorem 6.1. Consider the infinite series ∑_{k=j}^∞ a_k. If a_k does not tend to 0 as k → ∞, then the series diverges.

That is, a necessary condition for ∑_{k=j}^∞ a_k to converge is that a_k → 0 as k → ∞.
Note also that the converse is NOT necessarily true. We saw in the above example that ∑ 1/n does NOT converge, despite the fact that 1/n does go to 0 as n goes to infinity.


Ex: The series ∑_{n=1}^∞ n/(2n + 3)

6.2 Geometric Series.

At this stage there is only one kind of infinite series you have met, and that is the infinite geometric series:
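As a reminder (stated here for reference): for a ≠ 0 the geometric series ∑_{n=0}^∞ a r^n = a + ar + ar^2 + ... converges to a/(1 - r) when |r| < 1 and diverges when |r| ≥ 1, since the partial sums are a(1 - r^N)/(1 - r) for r ≠ 1.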


6.3 Telescoping Series

Another type of infinite series which we can deal with is the so-called telescoping series.
Ex. Find the sum ∑_{n=1}^∞ 1/(n^2 + n).
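For reference, a sketch: by partial fractions 1/(n^2 + n) = 1/n - 1/(n + 1), so the partial sums telescope:
s_N = ∑_{n=1}^N (1/n - 1/(n + 1)) = 1 - 1/(N + 1) → 1 as N → ∞.
Hence the series converges and its sum is 1.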


In general, series are very difficult to sum, and so it is not practical to try to find a formula for the partial sums in closed form. For example, although it is true that
∑_{n=1}^∞ 1/n^2 = π^2/6,
this is quite difficult to show.
Instead, we are interested in the question "Does a given series converge?", and NOT "What does it converge to?". We will develop some tests to examine the convergence of series.
I will use the notation ∑ a_n for ∑_{n=j}^∞ a_n to represent the infinite series starting at some finite value j.



6.4 Integral Test

We can sometimes decide the convergence of an infinite series by looking at a corresponding improper integral. Suppose f(x) is a positive decreasing function on [1, ∞), and a_n = f(n) whenever n is an integer.
By comparing areas we see that the total area of the under-approximation is
∑_{n=2}^∞ a_n ≤ ∫_1^∞ f(x) dx
and the total area of the over-approximation is
∑_{n=1}^∞ a_n ≥ ∫_1^∞ f(x) dx.
Thus, we see that
if ∫_1^∞ f(x) dx converges then ∑ a_n converges;
if ∫_1^∞ f(x) dx diverges then ∑ a_n diverges.

Ex: Check that ∑_{n=1}^∞ 1/n diverges and that ∑_{n=1}^∞ 1/n^2 converges.

Ex. Show that ∑_{n=2}^∞ 1/(n log n) diverges.
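For reference, a sketch of the last one: f(x) = 1/(x log x) is positive and decreasing for x ≥ 2, and substituting u = log x gives
∫_2^∞ 1/(x log x) dx = lim_{R→∞} [log(log x)]_2^R = ∞,
so the integral diverges and hence, by the integral test, so does the series.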


6.5 p-series:

Consider the series
ζ(p) = ∑_{n=1}^∞ 1/n^p.
Using the integral test we see that this series converges if p > 1 and diverges if p ≤ 1.
Proof:


The integral test uses an area to bound a series. In some cases one can bound a series by another series and hence make sensible conclusions regarding convergence.

6.6 Comparison Test:

Theorem 6.2. Suppose 0 ≤ a_n ≤ b_n. That is, suppose the sequence of numbers a_n is squeezed between 0 and b_n. Then
if ∑ b_n converges, so does ∑ a_n, and
if ∑ a_n diverges, so does ∑ b_n.

When making comparisons, we generally compare with the p-series mentioned above.
You will not be expected in this course to use this test. The following test will be particularly useful when we return to the original problem of taking more and more terms in a Taylor series.

6.7 The Ratio Test:

This test was developed by d'Alembert, a French mathematician who lived during the seventeen hundreds.
Theorem 6.3. Suppose we have a sequence a_n of positive terms, and suppose
a_{n+1}/a_n → L as n → ∞.
Then ∑ a_n converges if L < 1, and diverges if L > 1.
If L = 1 then the test fails.


Proof. When L > 1 we do not have a_n → 0 as n → ∞, so the series diverges. For L < 1, if a_{n+1}/a_n ≤ L for all sufficiently large n, then we can show by induction that a_n ≤ L^n a_0, and ∑_n L^n converges since it is a GP with L < 1, so by the comparison test the original series converges. This can be generalised to give a full proof, which is covered in the notes.
give a full proof and is covered in the notes.


We generally use the ratio test when exponentials and factorials are involved.
Ex. ∑_{k=1}^∞ k^2/2^k
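For reference, a sketch of this one: with a_k = k^2/2^k,
a_{k+1}/a_k = ((k + 1)^2/2^{k+1})·(2^k/k^2) = (1/2)(1 + 1/k)^2 → 1/2 as k → ∞,
and since 1/2 < 1 the series converges by the ratio test.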

Ex. ∑_{k=0}^∞ 1/k!

Ex. ∑_{k=1}^∞ k!/2^k

6.8 Alternating Series:

We have seen that the series ∑_{k=1}^∞ 1/k diverges to infinity, i.e. the series
1 + 1/2 + 1/3 + ....
is unbounded. Consider now what happens if we look at the series
1 - 1/2 + 1/3 - 1/4 + ....
This series does, in fact, converge (surprisingly, to log 2!). Such a series is called an alternating series. More specifically, if a_n is a sequence of positive numbers then
∑ (-1)^k a_k
is called an alternating series.
To examine the convergence of an alternating series, we begin by considering the corresponding non-alternating series
∑ a_k,
which we analyse using the techniques described above.


If this series converges then the alternating series, being bounded
by it, will also converge.
We say in this case that the alternating series converges absolutely. The meaning of this term will be explained later.
As we saw in the example above, it is possible for the alternating series to converge even though the corresponding non-alternating series does not. If the non-alternating series does not converge we use the following test, called Leibniz's Test.


6.9 Leibniz's Test:

The following test was developed by Leibniz, a German contemporary of Newton, who discovered the Calculus independently at about the same time.
Theorem 6.4. Suppose that a_n is a sequence of positive real numbers, and
i) a_1 > a_2 > a_3 > ..., i.e. a_n > a_{n+1} for all n, and
ii) lim_{n→∞} a_n = 0;
then the alternating series ∑ (-1)^n a_n converges (conditionally).

Note that condition (i) says that the terms are monotonically decreasing. Note also that this ONLY works for an alternating series.
Ex. ∑_{k=1}^∞ (-1)^k/k^2

Ex. ∑_{k=2}^∞ (-1)^k/(k log k)
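For reference, a sketch of the last one: the terms a_k = 1/(k log k) are positive, decreasing and tend to 0, so by Leibniz's Test the alternating series ∑_{k=2}^∞ (-1)^k/(k log k) converges. It does not converge absolutely, since we saw above that ∑ 1/(k log k) diverges, so the convergence is conditional.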


6.10 Conditionally Convergent Series:

We will now look at a remarkable difference between absolutely and conditionally convergent series.
Returning to our series:


Thus, by playing around with the order of the terms we get half the sum.
Such bizarre behaviour is typical of conditionally convergent series. In fact, any conditionally convergent series can be made to sum to anything you please, and indeed can even be made to diverge. By conditionally convergent we mean that the series will converge if we add the terms up in the standard order. To briefly illustrate why this happens, suppose we wish to make the above series add up to 10. We take enough positive terms of the series to get just beyond 10; to do this MAPLE tells me we need about 10^8 terms. Now take some of the negative terms, in fact 12 will do. This brings the sum back to about 9.69209. Now add some more positive terms to bring the sum back over 10, and then use some negative terms to bring it back to below 10, and so on. It is a little harder to show that the partial sums do tend to 10 by this process, but nonetheless this can be done in a very precise way, and furthermore the idea can be generalised to show that any conditionally convergent alternating series can be made to sum to anything.


6.11 MAPLE Notes:

The following commands are relevant to the material of this section.
> sum(f, k=m..n);
is used to compute the sum of f(k) as k goes from m to n.
> sum(k^2, k=1..4);
                                30
> sum(k^2, k=1..n);
                 1/3 (n + 1)^3 - 1/2 (n + 1)^2 + 1/6 n + 1/6
> sum(1/k^2, k=1..infinity);
                                π^2/6

7 Taylor Series.

We return now to the problem we started with. That is, given a Taylor polynomial for a (smooth) function f, can we make sense of letting the number of terms go to infinity?
Assuming, for the moment, that the series converges, we make the following definition:


Definition: Suppose f has derivatives of all orders; then the series
f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + ... + (f^(k)(a)/k!)(x - a)^k + ... = ∑_{k=0}^∞ (f^(k)(a)/k!)(x - a)^k
is called the Taylor Series for f about x = a. (This is also known as the Taylor Expansion of f about x = a.)
In the case when a = 0, the simpler series
f(0) + f'(0)x + (f''(0)/2!)x^2 + ... + (f^(k)(0)/k!)x^k + ... = ∑_{k=0}^∞ (f^(k)(0)/k!)x^k
is called the Maclaurin Series for f(x). (Colin Maclaurin was professor of Mathematics at Edinburgh about the time of Newton.)
Example. Find the Maclaurin series for sin x.
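For reference: the derivatives of sin x cycle through cos x, -sin x, -cos x, sin x, so at x = 0 they cycle through 1, 0, -1, 0, and the Maclaurin series is
sin x = x - x^3/3! + x^5/5! - ... = ∑_{k=0}^∞ (-1)^k x^{2k+1}/(2k + 1)!.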

Example. Find the Taylor series for cos x about x = π/2.


Example. The function f(x) = log x does not have a Maclaurin series. Find its Taylor series about x = 1.
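For reference, a sketch: f'(x) = 1/x, f''(x) = -1/x^2, f'''(x) = 2/x^3 and in general f^(k)(x) = (-1)^{k-1}(k - 1)!/x^k, so f^(k)(1) = (-1)^{k-1}(k - 1)! and the Taylor series about x = 1 is
log x = (x - 1) - (x - 1)^2/2 + (x - 1)^3/3 - ... = ∑_{k=1}^∞ (-1)^{k-1}(x - 1)^k/k.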

(Comment: An ancient formula for computing the square root of a given number (approximately) is √(a^2 + x) ≈ a + x/(2a). This is simply the first two terms of the Maclaurin series (in x) for f(x) = √(a^2 + x).)


7.1 Convergence of Taylor Series

Two important questions arise. Firstly, exactly where does the Taylor series of a function converge? And secondly, if it does, can we be sure that the series converges back to the function we started with?
A proper answer to the first of these questions requires a knowledge of Complex Analysis and is beyond the scope of the course. To answer the second question, we take the Taylor polynomial P_n(x) and, as before, write
R_{n+1}(x) = f(x) - P_n(x)
for the remainder. We can then state:


Theorem 7.1. Suppose that f has derivatives of all orders at a and x lies in the domain of f. If, for each fixed x, lim_{n→∞} R_{n+1}(x) = 0, then f is represented by its Taylor series, i.e.
f(x) = ∑_{k=0}^∞ f^(k)(a)(x - a)^k/k!.


Example: Prove that the Taylor series for e^x converges to e^x everywhere.
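For reference, a sketch: with a = 0, Taylor's Theorem gives
|R_{n+1}(x)| = |e^c x^{n+1}/(n + 1)!| ≤ e^{|x|} |x|^{n+1}/(n + 1)!
for some c between 0 and x. For each fixed x the factorial in the denominator eventually grows much faster than |x|^{n+1}, so R_{n+1}(x) → 0 as n → ∞, and by Theorem 7.1 the Maclaurin series converges to e^x for every real x.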

7.2 Common Maclaurin Series
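For reference, the standard Maclaurin series usually collected at this point are listed below (the exact list used in the course may differ slightly; each one can be derived as in the examples above):
e^x = ∑_{k=0}^∞ x^k/k!, valid for all x;
sin x = ∑_{k=0}^∞ (-1)^k x^{2k+1}/(2k + 1)!, valid for all x;
cos x = ∑_{k=0}^∞ (-1)^k x^{2k}/(2k)!, valid for all x;
1/(1 - x) = ∑_{k=0}^∞ x^k, valid for |x| < 1;
log(1 + x) = ∑_{k=1}^∞ (-1)^{k-1} x^k/k, valid for -1 < x ≤ 1.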

Here are some examples of how these series can be used.


Example: Use Maclaurin series to find:
lim_{x→0} (cos x - 1 + x^2/2)/x^4.
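For reference, a sketch: cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ..., so
cos x - 1 + x^2/2 = x^4/24 - x^6/720 + ...,
and dividing by x^4 gives 1/24 - x^2/720 + ... → 1/24 as x → 0.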

Example: Use Maclaurin series to find an approximation to:
∫_0^1 sin(x^2) dx.
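For reference, a sketch: sin(x^2) = x^2 - x^6/3! + x^{10}/5! - ..., so integrating term by term,
∫_0^1 sin(x^2) dx = 1/3 - 1/(7·3!) + 1/(11·5!) - ... ≈ 1/3 - 1/42 + 1/1320 ≈ 0.3103.
Since this is an alternating series with decreasing terms, the error in stopping here is at most the next term, 1/(15·7!) ≈ 0.0000132.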


7.3 Power Series

The Maclaurin and Taylor series we have been studying above have the following general form:
∑_{n=0}^∞ a_n x^n   or   ∑_{n=0}^∞ a_n(x - a)^n.

Series of this form, which are basically polynomials of infinite


degree, are called Power Series. They play a very important
role in the applications of Mathematics, since there is a method
of solving certain types of differential equations, (which cannot
be solved by other methods), using power series.
We can analyse these, using the ratio test, to find where they
converge and this gives us a partial answer to the question of
where Taylor Series converge.

Example: Apply the ratio test to ∑_{n=1}^∞ n x^n/2^n to find the interval of convergence.
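For reference, a sketch: with a_n = n x^n/2^n and x ≠ 0,
|a_{n+1}/a_n| = ((n + 1)/n)·(|x|/2) → |x|/2 as n → ∞,
so the series converges for |x| < 2 and diverges for |x| > 2; the ratio test says nothing at x = ±2. (There the terms are ±n, which do not tend to 0, so in fact the series diverges at both endpoints.)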

Notice that since the ratio test gives NO information when the limit is 1, this method will ensure convergence on an open interval but give no information about what is happening at the endpoints. To deal with these, further analysis is required. This will not be covered in this course.


Ex. Find the interval of convergence for the power series ∑_{k=0}^∞ (x - 3)^k/(3^k + 2).

7.3.1 Radius of Convergence

We see that power series converge on intervals. Half the length of such an interval is called the radius of convergence. (The name comes from the fact that these series actually converge inside discs in the complex plane, where the interval of convergence on the real line is simply a diameter.)
7.4 Manipulation of Power Series

Given two power series in powers of (x - a), f(x) = ∑_{k=0}^∞ a_k(x - a)^k and g(x) = ∑_{k=0}^∞ b_k(x - a)^k, which both converge for x in some common interval of convergence, we can add, subtract or multiply these two series, and the resulting series will also converge in that interval.
More importantly, if f(x) = ∑_{k=0}^∞ a_k(x - a)^k for |x - a| < R then
i) f is continuous and differentiable for |x - a| < R and
f'(x) = ∑_{k=1}^∞ k a_k (x - a)^{k-1};
ii) f is integrable on |x - a| < R and a primitive for f is
F(x) = ∑_{k=0}^∞ (a_k/(k + 1))(x - a)^{k+1}.

Example: Show how to get the Maclaurin series for cos x from
that for sin x.
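For reference, a sketch: differentiating the sin series term by term (allowed inside its interval of convergence, which is all of R),
cos x = d/dx ∑_{k=0}^∞ (-1)^k x^{2k+1}/(2k + 1)! = ∑_{k=0}^∞ (-1)^k (2k + 1)x^{2k}/(2k + 1)! = ∑_{k=0}^∞ (-1)^k x^{2k}/(2k)!.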


Example. Consider the power series ∑_{k=0}^∞ x^k.

Example. Find the power series for x e^x.

Example. Write down the first 4 terms of the power series for e^x/(1 - x) and state where the series is valid.
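For reference, a sketch: multiplying the series for e^x by the geometric series for 1/(1 - x) (both valid on |x| < 1, since the series for e^x converges everywhere),
e^x/(1 - x) = (1 + x + x^2/2 + x^3/6 + ...)(1 + x + x^2 + x^3 + ...) = 1 + 2x + (5/2)x^2 + (8/3)x^3 + ...,
valid for |x| < 1.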


Note that one can construct infinite series (NOT power series) in which the above results are NOT true.
Example. Here is an example of a function defined in terms of a series which is continuous everywhere but differentiable nowhere:
f(x) = ∑_{n=1}^∞ (1/2)^n sin(4^n x).

