
Welcome to Calculus.

I'm Professor Ghrist.


We're about to begin Lecture 56 on
Approximation and Error.
We've seen the big picture of how Taylor series and polynomial approximations to functions work.
In this lesson, we're going to change focus and detail exactly what happens when we perform approximations.
By now, you know that the sum of 1 over n squared converges, since it is a p-series with p equal to 2.
But to what does it converge?
We've claimed in the past that this converges to pi squared over 6; that is a deep result, and we can't get it easily.
So let's approximate.
The true value of pi squared over 6 is, in decimal form, 1.6449, etcetera.
How many terms would we have to sum up in this series to get within a certain amount of that true value?
Well, let's fire up the computer and see
what happens
if we add up, let's say, the first ten
terms.
Then we get an answer that is, well, within a neighborhood, but not exactly close.
So let's add up the first 20 terms and see
how close
we get.
Now we're doing a fair bit better.
If we take a little bit more time and effort and compute the first 100 terms, then it seems as though we're definitely within 1% of the true answer.
And if we add up the first 1000 terms, well, now we're getting something that is really fairly close to pi squared over 6.
But how close?
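The experiment just described is easy to reproduce. Here is a minimal sketch in Python; the term counts match the ones tried in the lecture, and the printed formatting is my own choice.

```python
import math

# The true value the partial sums are approaching: pi^2 / 6 = 1.6449...
true_value = math.pi ** 2 / 6

# Sum the first N terms of 1/n^2 for the values of N tried in the lecture
for N in (10, 20, 100, 1000):
    partial = sum(1 / n**2 for n in range(1, N + 1))
    error = true_value - partial
    print(f"N = {N:4d}: partial sum = {partial:.6f}, error = {error:.6f}")
```

With 100 terms the error is already under 1% of the true value, and with 1000 terms it is just under one part in a thousand.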
Well, in general, you are never going to
be able to get to the truth.
The true answer involves a limit.
And that just takes a lot of work.
In general, what you can do is come up with an approximation, say by summing up terms up to and including a sub capital N.
Now what's left over is the error, which we'll denote E sub capital N.
You're not going to be able to compute that error exactly, since then you would know the truth, but you can control, or bound, that error.
Let's see how that works, in the context
of an alternating series.
Let's consider a series that satisfies the
criteria of the alternating
series test.
That is, the sum of negative 1 to the n,
a sub n, where the a sub n's are positive,
decreasing and limiting to 0.
Then, remember how this convergence
happens as you take the partial sums.
You're always jumping over the true
answer to the right and then back to the
left, because of the alternating nature.
Then in this case, it's easy to get an upper bound on the error, E sub N, in absolute value: it is precisely a sub N plus 1, the next term in the series, because you're always overshooting.
When you have an alternating series, this result is simple and useful.
Let's consider the approximation of 1 over the square root of e, with the goal of getting within 1 1000th.
Well, if we use our familiar expansion for e to the x, where x equals negative one half, then we see this is really an alternating series, with the a sub n term equal to 1 over n factorial times 2 to the n.
Now, our goal is to add up a finite number of these terms and get the error less than 1 1000th.
Then, the alternating series bound says
that we need
to find a capital N, so that a sub capital
N plus 1 is less than 1
1000th.
Well, what is a sub capital N plus 1?
It is 1 over the quantity capital N plus 1 factorial, times 2 to the capital N plus 1.
Getting that less than 1 1000th is not so hard: that's going to work whenever N plus 1 factorial times 2 to the N plus 1 is greater than 1000, and that's true whenever N is greater than or equal to 4.
So it doesn't take many terms to
approximate 1 over square root of E.
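To see the bound in action, here is a quick numerical check in Python; the choice N = 4 comes from the computation above.

```python
import math

# Alternating series for e^(-1/2): sum of (-1)^n / (n! * 2^n)
N = 4
approx = sum((-1)**n / (math.factorial(n) * 2**n) for n in range(N + 1))

# Alternating series error bound: |E_N| <= a_(N+1)
bound = 1 / (math.factorial(N + 1) * 2**(N + 1))

true_value = 1 / math.sqrt(math.e)
print(f"approx = {approx:.6f}, true = {true_value:.6f}")
print(f"actual error = {abs(true_value - approx):.6f}, bound = {bound:.6f}")
```

The bound a sub 5 equals 1 over 3840, comfortably under 1 1000th, and the actual error is smaller still.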
But let's consider what it would take to
approximate log of 2,
using the alternating harmonic series.
In this case, a sub n is 1 over n.
In order to get the error less than 1 1000th, what do we need?
Well, again by the alternating series bound, we need a sub capital N plus 1 less than 1 1000th.
That's the same thing as saying N is greater than or equal to 1000.
And that is a lot of terms to get within
the same error amount as we used for an
exponential.
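A quick numerical check, sketched in Python, shows how slowly the alternating harmonic series closes in on log of 2:

```python
import math

# Partial sums of the alternating harmonic series, which converges to ln 2
def partial_log2(N):
    return sum((-1)**(n + 1) / n for n in range(1, N + 1))

true_value = math.log(2)
for N in (10, 100, 1000):
    err = abs(true_value - partial_log2(N))
    print(f"N = {N:4d}: error = {err:.6f}, bound a_(N+1) = {1 / (N + 1):.6f}")
```

Even at 100 terms the error is still several parts in a thousand; only near 1000 terms does it dip under the target.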
What happens if you don't have an
alternating series?
Well, you need a different error bound.
There is one associated with the integral test.
Let's say that you have continued your series a sub n to a function a of x, and have shown convergence by means of integrating this function.
Then one can see that the tail, the E sub capital N term, has a natural lower bound in terms of the integral of a of x: specifically, if one integrates as x goes from capital N plus 1 to infinity, that is a strict lower bound for E sub N.
Now, that's not quite what we want: we tend to want an upper bound for the error.
But note what we can do if we slide everything over by one unit.
Then the appropriate upper bound for the error is the integral of a of x dx as x goes from capital N to infinity.
This is remarkable, in that we get a bound in both directions.
With this in mind, let's see what it would take to get close to pi squared over 6 when we sum up terms of the form 1 over n squared.
Now, we know the value of pi squared over 6, and let's say that we want to get within 1 1000th.
Well, we know that this p-series converges by the integral test, using the continuous function a of x equals 1 over x squared.
By the integral test bound, E sub capital N is less than the integral from capital N to infinity of this a of x.
We've done the integral of 1 over x squared enough times that you'll believe me when I say that this integral comes to 1 over capital N.
Now, if we want that to be less than 1 1000th, that's really saying that N has to be larger than a thousand.
And if you'll recall when we did some of
our computations,
that is exactly what we saw, when we
summed up the first
1000 terms.
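The two-sided nature of the integral test bound is worth verifying numerically; this Python sketch checks that the actual tail really is sandwiched between 1 over N plus 1 and 1 over N.

```python
import math

true_value = math.pi ** 2 / 6

# Integral test sandwich for a_n = 1/n^2:
#   integral from N+1 to infinity of dx/x^2  <  E_N  <  integral from N to infinity
#   i.e.  1/(N+1) < E_N < 1/N
for N in (10, 100, 1000):
    tail = true_value - sum(1 / n**2 for n in range(1, N + 1))
    print(f"N = {N:4d}: 1/(N+1) = {1/(N+1):.6f} < E_N = {tail:.6f} < 1/N = {1/N:.6f}")
```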
The integral test gives very precise
bounds.
If you don't have an integral test, and
you
don't have an alternating series, what can
you do?
Well, there's one last error bound that involves only the Taylor expansion.
But we're going to pay for that generality in terms of complexity, for the following result is deep and difficult to grasp.
For that reason, we'll keep it simple by looking at what happens to f of x for x close to 0.
Assume that f is smooth; then Taylor expand f about x equals zero, keeping only terms up to and including order N.
Now, of course, f is not equal to this
Taylor polynomial, it's just an
approximation, so there's some error.
But here, the error term, e sub n, is a
function of x, and not a constant.
So, what can we say about that error
function?
Well, the first thing that we can say is
that E sub N of x is in big O,
of x to the N plus 1.
That's not too surprising.
Everything else is higher order terms.
On the other hand, this is kind of a weak
result in that,
in big O, you're only finding out what
happens up to a constant and
in the limit as x goes to 0.
What we'd really like is
a more explicit bound that we can use to
get numerical results.
Well, there is a strong form of this theorem that says that the error is bounded in absolute value by some constant C times x to the N plus 1, over N plus 1 factorial, where this constant C serves as an upper bound for the N plus first derivative of f at all values of t between 0 and x, inclusive.
This is a much stronger version of the
error bound, and tells you that it's
really
the n plus first term in the Taylor
expansion, that is giving you control over
the error.
In fact, if you want to
get really strong, we can replace the
constant big C
by exactly the n plus first derivative
of f at some t, that is between 0
and x.
And this is a very remarkable result in
that you're not bounding the error, you're
saying exactly what the error equals as
a function of x.
What I'm not telling you is what t you have to choose in order to evaluate that N plus first derivative.
Now, I'll let you work out what this would
be if you replace 0 with a.
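For reference, the two forms of the bound just described can be written compactly; this is the standard statement for expansion about x equals 0.

```latex
% Taylor's theorem about x = 0, keeping terms up to order N:
%   f(x) = \sum_{k=0}^{N} \frac{f^{(k)}(0)}{k!} x^k + E_N(x)
%
% Weak form: E_N(x) = O(x^{N+1}) as x -> 0.
%
% Strong form: for some constant C bounding |f^{(N+1)}(t)|
% for all t between 0 and x,
\[
  \bigl|E_N(x)\bigr| \;\le\; \frac{C\,|x|^{N+1}}{(N+1)!}.
\]
% Exact (Lagrange) form: for some t between 0 and x,
\[
  E_N(x) \;=\; \frac{f^{(N+1)}(t)}{(N+1)!}\,x^{N+1}.
\]
```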
Let's see how this bound works in an
example.
Let's approximate the square root of e
within 10 to the
negative 10.
Using the familiar expansion for e to the
x and evaluating at x equals one half.
Then what do we get?
We have some E sub N, where by the Taylor
theorem, E sub N is less than some
constant C over n plus one factorial times
x to the n plus one.
In this
case x equals one half, and C is some
constant
that bounds the n plus first derivative of
e to the x
for all values of x between 0 and one
half.
Now fortunately, derivatives of e to the x
are easy to compute.
That's just e to the x.
So, what is a good upper
bound for e to the x?
Well, since e to the x is increasing, then
a good upper bound would be the right hand
end point, e to the one half.
Well that number is maybe not so easy to
work with, so let's just say two, because
I know that two is a reasonable upper
bound for e to the one half.
Therefore, we get that E sub N is less than 1 over N plus 1 factorial times 2 to the N.
That's maybe not the best bound we could come up with, but it will get the job done.
Because if we tally up N plus 1 factorial times 2 to the N for various values of N, we see, without too much effort, that having N greater than or equal to 10 will work.
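The tally just mentioned, along with a check of the actual error, can be sketched in Python as follows; the bound 1 over N plus 1 factorial times 2 to the N uses C equals 2, as derived above.

```python
import math

# Taylor bound for e^x at x = 1/2 with C = 2:  E_N <= 1 / ((N+1)! * 2^N)
for N in (8, 9, 10, 11):
    bound = 1 / (math.factorial(N + 1) * 2**N)
    print(f"N = {N:2d}: bound = {bound:.2e}")

# Check the actual error for N = 10 against the true value sqrt(e)
N = 10
approx = sum(0.5**n / math.factorial(n) for n in range(N + 1))
actual_error = abs(math.sqrt(math.e) - approx)
print(f"actual error at N = 10: {actual_error:.2e}")
```

The bound first drops below 10 to the negative 10 at N equals 10, matching the lecture's conclusion.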
Well, that went so well.
Let's do it again.
This time to estimate arcsin of 1 10th
within 10 to the negative 10.
Now, I won't go through the details of the
Taylor expansion for arcsin of x.
The terms are a little bit complicated,
but
not too bad if you assume that they're
given.
What matters is the Taylor Error Bound,
that e sub n is less than constant C
over n plus 1 factorial times x to the n
plus 1, where x equals 1 10th.
Now, this constant C is the critical piece
of information.
It's an upper bound for the n plus first
derivative of arcsin of x for all x
between zero and 1 10th.
Now, who remembers the formula for the N plus first derivative of arcsin of x?
Anybody?
I don't remember it either, and this is the difficult part of using the Taylor bound: you don't necessarily know a good bound for the N plus first derivative.
How are we going to solve this?
Well, if the Taylor theorem is not going to work, and it's not an alternating series, and I don't think I want to integrate this function, then what do we do?
Well, we're just going to have to think.
But, if we think, well, this is not so
bad.
Look at the terms in this series.
We have a 1 10th, and then something times
1 10th cubed.
Plus something times 1 10th to the 5th,
etcetera.
It seems as though at every step, where we go from n to n plus 2, we're picking up an extra 1 10th squared.
Okay, so that's 1 over 100.
But if we look at the coefficients, the 2n plus 1 and the products of odds over the products of evens, then we're picking up another factor of 10 in the denominator.
And I claim that a sub n plus 2, the next term in the series, is less than the previous term, a sub n, divided by 1000.
What this means, really, is that you're picking up three decimal places of accuracy with each subsequent term.
And that means that if we want to get within 10 to the negative 10th, it's going to suffice to choose N greater than or equal to 7.
So the first four terms that we have represented on this slide suffice to approximate arcsin of 1 10th within 10 to the negative 10.
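As a sanity check, here is a Python sketch that sums the first four terms of the arcsin series at x equals 1 10th and compares against the library value; the closed-form coefficient used here is the standard one for the arcsin expansion.

```python
import math

# Coefficient of x^(2k+1) in the Taylor series of arcsin x:
#   (2k)! / (4^k * (k!)^2 * (2k+1))
def arcsin_coeff(k):
    return math.factorial(2 * k) / (4**k * math.factorial(k)**2 * (2 * k + 1))

x = 0.1
# Four terms: powers x, x^3, x^5, x^7 (that is, up to N = 7)
approx = sum(arcsin_coeff(k) * x**(2 * k + 1) for k in range(4))

error = abs(math.asin(x) - approx)
print(f"approx = {approx:.12f}, asin = {math.asin(x):.12f}, error = {error:.2e}")
```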
Never forget to
think, even if a Taylor Bound doesn't
work.
In general, bounding errors is just hard.
There's no getting around it.
If you're fortunate enough to have an
alternating series, then it's not so bad.
If you've got something that works with an
integral test, you're great.
And if not, you're either going to have to
resort to the
Taylor Theorem or use your head.
That brings to a close all of the main
topics of this course.
We're not quite done though.
We still have a few things to say to wrap
up this chapter of this course.
To give you the big picture, and to point
ahead to some of
the broader ideas you may encounter in
Calculus and in the world of Mathematics.
