
Polynomial Approximation by Least Squares
Distance in a Vector Space
The 2-norm of an integrable function f over an interval [a, b] is

\[
\| f \|_2^2 = \int_a^b f(x)^2 \, dx .
\]
In least squares approximation we minimize

\[
\| f - p_n \|_2^2 = \int_a^b \left( f(x) - p_n(x) \right)^2 dx
\]

with respect to all polynomials p_n of degree n.
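The squared distance above is an ordinary definite integral, so it can be checked numerically. The following Python sketch (not part of the original notebook, which works in Mathematica) approximates \(\| f - p \|_2^2\) with the composite trapezoidal rule; the functions f(x) = x^2 and p(x) = x on [0, 1] are hypothetical examples chosen only for illustration.

```python
import numpy as np

def dist_sq(f, p, a, b, m=10_000):
    """Approximate ||f - p||_2^2 = integral_a^b (f(x) - p(x))^2 dx
    with the composite trapezoidal rule on m subintervals."""
    x = np.linspace(a, b, m + 1)
    y = (f(x) - p(x)) ** 2
    dx = (b - a) / m
    return float(((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx)

# hypothetical example: f(x) = x^2 approximated by p(x) = x on [0, 1];
# the exact value of the integral is 1/30
d = dist_sq(lambda x: x**2, lambda x: x, 0.0, 1.0)
print(d)  # close to 1/30
```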
We use the ansatz

\[
p_n(x) = \sum_{i=0}^{n} a_i x^i
\]

and

\[
\| f - p_n \|_2^2 = \int_a^b \left( f(x) - \sum_{i=0}^{n} a_i x^i \right)^2 dx .
\]
The minimum value of the error follows from

\[
\frac{\partial}{\partial a_j} \| f - p_n \|_2^2 = 0 = \frac{\partial}{\partial a_j} \int_a^b \left( f(x) - \sum_{i=0}^{n} a_i x^i \right)^2 dx ,
\]
which is

\[
0 = \frac{\partial}{\partial a_j} \int_a^b \left( f(x)^2 - 2 \sum_{i=0}^{n} a_i x^i f(x) + \left( \sum_{i=0}^{n} a_i x^i \right)^2 \right) dx
\]
\[
0 = \frac{\partial}{\partial a_j} \int_a^b f(x)^2 \, dx
  - 2 \, \frac{\partial}{\partial a_j} \sum_{i=0}^{n} a_i \int_a^b x^i f(x) \, dx
  + \frac{\partial}{\partial a_j} \int_a^b \left( \sum_{i=0}^{n} a_i x^i \right)^2 dx
\]
\[
0 = -2 \int_a^b x^j f(x) \, dx + 2 \sum_{i=0}^{n} a_i \int_a^b x^i x^j \, dx
\]
\[
0 = -b_j + \sum_{i=1}^{n+1} a_i \, c_{j,i} \qquad \text{with } j = 0, 1, 2, \ldots, n
\]

\[
\sum_{i=1}^{n+1} a_i \, c_{j,i} = b_j \qquad \text{with } j = 0, 1, 2, \ldots, n
\]
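The normal equations above can be assembled and solved directly. Here is a small Python sketch (the notebook itself uses Mathematica): the moment integrals \(\int_a^b x^{i+j} dx\) are evaluated in closed form, the right-hand side \(\int_a^b x^j f(x)\, dx\) with the trapezoidal rule. The helper name lsq_poly_coeffs is made up for this illustration.

```python
import numpy as np

def lsq_poly_coeffs(f, a, b, n, m=20_000):
    """Solve the normal equations
        sum_i a_i * integral_a^b x^(i+j) dx = integral_a^b x^j f(x) dx
    for the degree-n least squares polynomial on [a, b]."""
    # moment matrix: integral_a^b x^(i+j) dx = (b^(i+j+1) - a^(i+j+1)) / (i+j+1)
    c = np.array([[(b**(i + j + 1) - a**(i + j + 1)) / (i + j + 1)
                   for i in range(n + 1)] for j in range(n + 1)])
    # right-hand side via the trapezoidal rule
    x = np.linspace(a, b, m + 1)
    dx = (b - a) / m
    rhs = []
    for j in range(n + 1):
        y = x**j * f(x)
        rhs.append(((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx)
    return np.linalg.solve(c, np.array(rhs))

# sanity check: fitting a quadratic with a quadratic reproduces it
coeffs = lsq_poly_coeffs(lambda x: x**2, 0.0, 1.0, 2)
print(coeffs)  # close to [0, 0, 1]
```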
Lecture_004.nb, 2012 Dr. G. Baumann
Matrix Representation
The system determining the coefficients is

\[
\begin{pmatrix}
c_{1,1}   & c_{1,2}   & c_{1,3}   & \cdots & c_{1,n+1}   \\
c_{2,1}   & c_{2,2}   & c_{2,3}   & \cdots & c_{2,n+1}   \\
\vdots    &           &           &        & \vdots      \\
c_{n+1,1} & c_{n+1,2} & c_{n+1,3} & \cdots & c_{n+1,n+1}
\end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix}
=
\begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_n \end{pmatrix} ,
\]
where

\[
c_{i,j} = \int_a^b x^{i+j-2} \, dx = \frac{1}{i+j-1} \left( b^{\,i+j-1} - a^{\,i+j-1} \right)
\]

and

\[
b_i = \int_a^b x^i f(x) \, dx .
\]
While this looks like an easy and straightforward solution to the problem, there are some issues of concern.
Example: Conditioning of Least Square Approximation
If we take a = 0 and b = 1, the least squares system is determined by

\[
c_{i,j} = \frac{1}{i+j-1}
\]

and the matrix is a Hilbert matrix, which is very ill-conditioned. We should therefore expect that any attempt to solve the full least squares problem, or its discrete version, is likely to yield disappointing results.
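The severity of this ill-conditioning is easy to demonstrate; the following short Python check (assuming NumPy; not part of the original notes) prints the 2-norm condition number of a few small Hilbert matrices.

```python
import numpy as np

def hilbert(n):
    """n x n Hilbert matrix H[i, j] = 1 / (i + j + 1) with 0-based indices."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

# the condition number grows roughly exponentially with n
for n in (3, 6, 9):
    print(n, np.linalg.cond(hilbert(n)))
```

Already for n = 9 the condition number exceeds 10^10, so roughly ten decimal digits are lost when solving the system in floating point.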
Example: Least Square Approximation
Given f(x) = e^x with x ∈ [0, 1], find the best second order polynomial p_2(x) in the least squares sense. The matrix c is given by

\[
c_{i,j} = \frac{1}{i+j+1} \qquad \text{with } i, j = 0, 1, 2, \ldots, n
\]

and

\[
b_j = \int_0^1 f(x) \, x^j \, dx \qquad \text{with } j = 0, 1, 2, \ldots, n .
\]

The polynomial is

\[
p_2(x) = a_0 + a_1 x + a_2 x^2 .
\]
First let us generate the matrix c by
In[1]:= c = Table[1/(i + j + 1), {i, 0, 2}, {j, 0, 2}]; c // MatrixForm
Out[1]//MatrixForm=
  1    1/2  1/3
  1/2  1/3  1/4
  1/3  1/4  1/5
The quantities b_j can be collected in a vector of length 3

In[2]:= b = Table[Integrate[x^j E^x, {x, 0, 1}], {j, 0, 2}]
Out[2]= {-1 + E, 1, -2 + E}
The determining system for the coefficients is

In[4]:= eqs = Thread[c . {a0, a1, a2} == b]; eqs // TableForm
Out[4]//TableForm=
a0 + a1/2 + a2/3 == -1 + E
a0/2 + a1/3 + a2/4 == 1
a0/3 + a1/4 + a2/5 == -2 + E
The solution for the coefficients a_j follows by applying Gauss elimination

In[6]:= sol = Solve[eqs, {a0, a1, a2}] // Flatten
Out[6]= {a0 -> 3 (-35 + 13 E), a1 -> 588 - 216 E, a2 -> -570 + 210 E}
Inserting the found solution into the polynomial p_2(x) delivers

In[7]:= p2 = a0 + a1 x + a2 x^2 /. sol
Out[7]= 3 (-35 + 13 E) + (588 - 216 E) x + (-570 + 210 E) x^2
A graphical comparison of the function f(x) and p_2(x) shows the quality of the approximation

In[8]:= Plot[Evaluate[{E^x, p2}], {x, 0, 1}, AxesLabel -> {"x", "f(x), p2(x)"}]
Out[8]= [plot of f(x) and p2(x) for 0 <= x <= 1; the two curves are nearly indistinguishable]
The absolute deviation of p_2(x) from f(x) is shown in the following graph, giving the error of the approximation

In[10]:= Plot[Evaluate[Abs[E^x - p2]], {x, 0, 1}, AxesLabel -> {"x", "|f(x) - p2(x)|"}]
Out[10]= [plot of |f(x) - p2(x)| for 0 <= x <= 1; the error stays below roughly 0.015, largest near the endpoints]
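As a cross-check of the notebook's Mathematica session, the same fit for f(x) = e^x (read off from the symbolic outputs above) can be reproduced in Python, assuming NumPy; the solved coefficients match the closed-form values and the maximum deviation lands near 0.015.

```python
import numpy as np

# least squares quadratic for e^x on [0, 1]: Hilbert system H a = b
H = np.array([[1 / (i + j + 1) for j in range(3)] for i in range(3)])
e = np.e
b = np.array([e - 1, 1.0, e - 2])  # integral_0^1 x^j e^x dx for j = 0, 1, 2
a0, a1, a2 = np.linalg.solve(H, b)

# compare with the closed-form coefficients from the notebook
print(a0, 3 * (-35 + 13 * e))  # both about 1.013
print(a1, 588 - 216 * e)       # both about 0.851
print(a2, -570 + 210 * e)      # both about 0.839

# maximum deviation of p2 from e^x on [0, 1]
x = np.linspace(0, 1, 2001)
err = np.abs(np.exp(x) - (a0 + a1 * x + a2 * x**2))
print(err.max())  # near 0.015
```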