
First Edition, 2009

ISBN 978 93 80168 25 8

© All rights reserved.

Published by:

Global Media
1819, Bhagirath Palace,
Chandni Chowk, Delhi-110 006
Email: globalmedia@dkpd.com
Table of Contents

1. Linear Algebra

2. Matrix Calculus

3. Wavelet Analysis

4. Stochastic Processes

5. Optimization

6. Integral Transforms

7. Mathematical Tables

8. Statistical Tables
Vectors and Scalars
A scalar is a single number value, such as 3, 5, or 10. A vector is an ordered set of
scalars.

A vector is typically described as a matrix with a row or column size of 1. A vector with
only one row is a row vector, and a vector with only one column is a column vector.

[Column Vector]

x = [x_1; x_2; ...; x_n]  (n × 1)

[Row Vector]

x = [x_1 x_2 ... x_n]  (1 × n)

A "common vector" is another name for a column vector, and this book will simply use
the word "vector" to refer to a common vector.

Vector Spaces
A vector space is a set of vectors and two operations (addition and multiplication,
typically) that follow a number of specific rules. We will typically denote vector spaces
with a capital-italic letter: V, for instance. A space V is a vector space if all the following
requirements are met. We will use x and y as arbitrary vectors in V, and c and d as
arbitrary scalar values. There are 10 requirements in all:

1. There is an operation called "Addition" (signified with a "+" sign) between two
vectors, x + y, such that if both the operands are in V, then the result is also in V.
2. The addition operation is commutative for all elements in V: x + y = y + x.
3. The addition operation is associative for all elements in V: (x + y) + z = x + (y + z).
4. There is a neutral element, φ, in V, such that x + φ = x. This is also called the zero
element.
5. For every x in V, there is a negative element -x in V, such that x + (-x) = φ.
6. There is an operation called "scalar multiplication," cx, such that if x is in V, the
result cx is also in V.
7. c(x + y) = cx + cy
8. (c + d)x = cx + dx
9. c(dx) = (cd)x
10. 1 × x = x

Some of these rules may seem obvious, but that's only because they have been generally
accepted, and have been taught to people since they were children.

Vector Basics
Scalar Product
A scalar product is a special type of operation that acts on two vectors, and returns a
scalar result. Scalar products are denoted as an ordered pair between angle-brackets:
<x,y>. A scalar product between vectors must satisfy the following four rules:

1. <x, x> ≥ 0
2. <x, x> = 0, only if x = 0
3. <x, y> = <y, x>
4. <cx + dy, z> = c<x, z> + d<y, z>

If an operation satisfies all these requirements, then it is a scalar product.

Examples

One of the most common scalar products is the dot product, which is discussed commonly
in Linear Algebra:

<x, y> = x^T y = x_1y_1 + x_2y_2 + ... + x_ny_n

Norm
The norm is an important scalar quantity that indicates the magnitude of the vector.
The norm of a vector x is typically denoted ||x||. To be a norm, an operation must satisfy
the following four conditions:

1. ||x|| ≥ 0
2. ||x|| = 0 only if x = 0.
3. ||cx|| = |c| ||x||
4. ||x + y|| ≤ ||x|| + ||y|| (the triangle inequality)

A vector is called normal if its norm is 1. A normal vector is sometimes also referred to
as a unit vector. Both terms will be used in this book. To make a vector normal, but
keep it pointing in the same direction, we can divide the vector by its norm:

x̂ = x / ||x||
Examples

One of the most common norms is the cartesian norm, which is defined as the square-root
of the sum of the squares:

||x|| = √(x_1^2 + x_2^2 + ... + x_n^2)

Unit Vector

A vector is said to be a unit vector if the norm of that vector is 1.
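
As a quick numerical illustration, here is a minimal sketch in Python with NumPy
(the example vector is an arbitrary choice) of computing the Cartesian norm and
producing a unit vector:

    import numpy as np

    # A hypothetical 3-element vector.
    x = np.array([3.0, 4.0, 12.0])

    # Cartesian (Euclidean) norm: square root of the sum of squares.
    norm_x = np.sqrt(np.sum(x**2))          # manual computation
    assert np.isclose(norm_x, np.linalg.norm(x))

    # Normalizing: divide the vector by its norm to get a unit vector.
    x_hat = x / norm_x
    print(norm_x)                 # 13.0
    print(np.linalg.norm(x_hat))  # 1.0
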

Orthogonality
Two vectors x and y are said to be orthogonal if the scalar product of the two is equal to
zero:

<x, y> = 0

Two vectors are said to be orthonormal if their scalar product is zero, and both vectors
are unit vectors.

Cauchy-Schwarz Inequality
The Cauchy-Schwarz inequality is an important result, and relates the norm of a vector to
the scalar product:

|<x, y>| ≤ ||x|| ||y||

Metric (Distance)
The distance between two vectors in the vector space V, called the metric of the two
vectors, is denoted by d(x, y). A metric operation must satisfy the following four
conditions:

1. d(x,y) ≥ 0
2. d(x,y) = 0 only if x = y
3. d(x,y) = d(y,x)
4. d(x,z) ≤ d(x,y) + d(y,z) (the triangle inequality)

Examples

A common form of metric is the distance between points a and b in the cartesian plane:

d(a, b) = √((a_1 − b_1)^2 + (a_2 − b_2)^2)

Linear Independence and Basis
Linear Independence
A set of vectors {v_1, v_2, ..., v_n} is said to be linearly dependent if
any vector v from the set can be constructed from a linear combination of the other
vectors in the set. Given the following linear equation:

a_1v_1 + a_2v_2 + ... + a_nv_n = 0

The set of vectors is linearly independent only if all the a coefficients are zero. If we
combine the v vectors together into a single matrix:

V = [v_1 v_2 ... v_n]

And we combine all the a coefficients into a single column vector:

a = [a_1, a_2, ..., a_n]^T

We have the following linear equation:

Va = 0

This equation can only be satisfied by a = 0 if the matrix V is invertible:

a = V^-1 0 = 0

Remember that for the matrix to be invertible, the determinant must be non-zero.

Non-Square Matrix V

If the matrix V is not square, then the determinant cannot be taken, and therefore the
matrix is not invertible. To solve this problem, we can premultiply by the transpose
matrix:

V^T V a = 0

And then the square matrix V^T V must be invertible for a = 0 to be the only solution:

a = [V^T V]^-1 0 = 0


Rank

The rank of a matrix is the largest number of linearly independent rows or columns in the
matrix.

To determine the rank, typically the matrix is reduced to row-echelon form. From the
reduced form, the number of non-zero rows, or the number of non-zero columns
(whichever is smaller) is the rank of the matrix.

If we multiply two matrices A and B, and the result is C:

AB = C

Then the rank of C is at most the minimum of the ranks of A and B:

rank(C) ≤ min(rank(A), rank(B))
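
A short numerical sketch (Python with NumPy; the matrices are arbitrary examples)
demonstrating the rank rules above:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],   # this row is 2x the first row
                  [0.0, 1.0]])  # an independent row raises the rank to 2
    B = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

    print(np.linalg.matrix_rank(A))   # 2
    print(np.linalg.matrix_rank(B))   # 2
    C = A @ B
    print(np.linalg.matrix_rank(C))   # 2, at most min(rank A, rank B)
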

Span
The span of a set of vectors V is the set of all vectors that can be created by linear
combinations of the vectors in the set.

Basis
A basis is a set of linearly-independent vectors that span the entire vector space.

Basis Expansion

If we have a vector y in V, and V has basis vectors v_1, v_2, ..., v_n, by definition, we can
write y in terms of a linear combination of the basis vectors:

y = a_1v_1 + a_2v_2 + ... + a_nv_n

or

y = Va

If V is invertible, the answer is apparent: a = V^-1 y. But if V is not invertible, then we can
perform the following technique:

a = [V^T V]^-1 V^T y

And we call the quantity [V^T V]^-1 V^T the left-pseudoinverse of V.

Change of Basis

Frequently, it is useful to change the basis vectors to a different set of vectors that span
the set, but have different properties. If we have a space V, with basis vectors {v_i} and a
vector in V called x, we can use the new basis vectors {u_i} to represent x:

x = b_1u_1 + b_2u_2 + ... + b_nu_n

or,

x = Ub

If the matrix U of new basis vectors is invertible, then the solution to this problem is
simple: b = U^-1 x.

Gram-Schmidt Orthogonalization
If we have a set of basis vectors that are not orthogonal, we can use a process known as
orthogonalization to produce a new set of basis vectors for the same space that are
orthogonal:

Given: the basis {v_1, v_2, ..., v_n}
Find the new basis {w_1, w_2, ..., w_n}
Such that <w_i, w_j> = 0 for i ≠ j

We can define the vectors as follows:

1. w_1 = v_1

2. w_k = v_k − Σ_{j=1}^{k−1} (<v_k, w_j> / <w_j, w_j>) w_j

Notice that the vectors produced by this technique are orthogonal to each other, but they
are not necessarily orthonormal. To make the w vectors orthonormal, you must divide
each one by its norm:

ŵ_i = w_i / ||w_i||
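
Here is a minimal sketch of this process in Python with NumPy (the input vectors
are arbitrary examples chosen only for illustration):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthogonalize a list of linearly independent vectors.

        Returns orthonormal vectors spanning the same space.
        """
        basis = []
        for v in vectors:
            w = v.astype(float)
            # Subtract the projection of v onto each previous w.
            for u in basis:
                w = w - np.dot(v, u) / np.dot(u, u) * u
            basis.append(w)
        # Normalize each orthogonal vector to make the set orthonormal.
        return [w / np.linalg.norm(w) for w in basis]

    vs = [np.array([1.0, 1.0, 0.0]),
          np.array([1.0, 0.0, 1.0]),
          np.array([0.0, 1.0, 1.0])]
    ws = gram_schmidt(vs)
    print(round(np.dot(ws[0], ws[1]), 12))  # 0.0 (orthogonal)
    print(np.linalg.norm(ws[2]))            # 1.0 (unit length)
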
Reciprocal Basis
A reciprocal basis is a special type of basis that is related to the original basis {v_i}. The
reciprocal basis {r_i} can be defined by:

<r_i, v_j> = 1 if i = j, and 0 otherwise

Linear Transformations
A linear transformation is a matrix M that operates on a vector in space V, and results in a
vector in a different space W. We can define a transformation as such:

M: V → W

In the above equation, we say that V is the domain space of the transformation, and W is
the range space of the transformation. Also, we can use a "function notation" for the
transformation, and write it as:

M(x) = Mx = y

Where x is a vector in V, and y is a vector in W. To be a linear transformation, the
principle of superposition must hold for the transformation:

M(av1 + bv2) = aM(v1) + bM(v2)

Where a and b are arbitrary scalars.

Null Space
The Nullspace of an equation is the set of all vectors x for which the following
relationship holds:

Mx = 0

Where M is a linear transformation matrix. Depending on the size and rank of M, there
may be zero or more vectors in the nullspace. Here are a few rules to remember:

1. If the matrix M is invertible, then the nullspace is trivial: it contains only the zero vector.


2. The dimension of the nullspace (N) is the difference between the number of
columns (C) of the matrix and the rank (R) of the matrix:

N = C − R
If the matrix is in row-echelon form, the dimension of the nullspace is given by
the number of columns without a leading 1 on the diagonal. For every column where there is
not a leading one on the diagonal, a nullspace vector can be obtained by placing a
negative one in the leading position for that column vector.

We denote the nullspace of a matrix A as:

N(A)
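
A small numerical sketch (Python with SciPy; the matrix M is an arbitrary example)
that finds a basis for the nullspace and checks the dimension rule above:

    import numpy as np
    from scipy.linalg import null_space

    # A 2x3 matrix: rank 2, 3 columns, so the nullspace has dimension 3 - 2 = 1.
    M = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

    N = null_space(M)             # columns form an orthonormal basis for N(M)
    print(N.shape)                # (3, 1): one basis vector, as N = C - R predicts
    print(np.allclose(M @ N, 0))  # True: every nullspace vector satisfies Mx = 0
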

Linear Equations
If we have a set of linear equations in terms of variables x, scalar coefficients a, and a
scalar result b, we can write the system in matrix notation as such:

Ax = b

Where x is an m × 1 vector, b is an n × 1 vector, and A is an n × m matrix.


Therefore, this is a system of n equations with m unknown variables. There are 3
possibilities:

1. If Rank(A) is not equal to Rank([A b]), there is no solution

2. If Rank(A) = Rank([A b]) = m, there is exactly one solution
3. If Rank(A) = Rank([A b]) < m, there are infinitely many solutions.

Complete Solution

The complete solution of a linear equation is given by the sum of the homogeneous
solution, and the particular solution. The homogeneous solution is the nullspace of the
transformation, and the particular solution is the values for x that satisfy the equation:

A(x) = b
A(xh + xp) = b

Where

x_h is the homogeneous solution, a vector in the nullspace of A that satisfies the
equation A(x_h) = 0
x_p is the particular solution that satisfies the equation A(x_p) = b

Minimum Norm Solution

If Rank(A) = Rank([A b]) < m, then there are infinitely many solutions to the linear
equation. In this situation, the solution called the minimum norm solution can be
found. This solution represents the "best" solution to the problem. To find the minimum
norm solution, we must minimize the norm of x subject to the constraint:
Ax − b = 0

There are a number of methods to minimize a value according to a given constraint, and
we will talk about them later.

Least-Squares Curve Fit


If Rank(A) does not equal Rank([A b]), then the linear equation has no solution. However,
we can find the solution which is the closest. This "best fit" solution is known as the
Least-Squares curve fit.

We define an error quantity E, such that:

E = Ax − b

Our job then is to find the minimum value for the norm of E:

min ||E||^2 = min ||Ax − b||^2

We do this by differentiating with respect to x, and setting the result to zero:

A^T (Ax − b) = 0

Solving, we get our result:

x = (A^T A)^-1 A^T b
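
The following minimal sketch (Python with NumPy, with arbitrary example data)
verifies that the normal-equation formula above matches a library least-squares solver:

    import numpy as np

    # Overdetermined system: 4 equations, 2 unknowns, no exact solution.
    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    b = np.array([0.1, 0.9, 2.1, 2.9])

    # Normal-equation form of the least-squares result: x = (A^T A)^-1 A^T b
    x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

    # The same answer from NumPy's built-in least-squares solver.
    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(x_normal, x_lstsq))  # True
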

Minimization
Kuhn-Tucker Theorem
The Kuhn-Tucker Theorem is a method for minimizing a function f(x) under the
constraint g(x) = 0. We can define the theorem as follows:

L(x, Λ) = f(x) + <Λ, g(x)>

Where Λ is the Lagrangian vector, and < , > denotes the scalar product operation. We
will discuss scalar products more later. If we differentiate this equation with respect to x
first, and then with respect to Λ, we get two sets of equations we can solve together. For
the minimum norm problem, with f(x) = <x, x> and g(x) = Ax − b, we have the final result:

x = A^T [A A^T]^-1 b

Projection
The projection of a vector v onto the vector space W is the vector in W at minimum distance
from v. In other words, we need to minimize the distance between
vector v, and an arbitrary vector w in W:

[Projection onto space W]

For every vector v there exists a vector ŵ in W, called the projection of v onto W,
such that <v − ŵ, p> = 0, where p is an arbitrary element of W.

Orthogonal Complement

The orthogonal complement of W is the set of all vectors that are orthogonal to every
element of W.

Distance between v and W

The distance between v and the space W is given as the minimum distance between
v and an arbitrary w in W:

d(v, W) = min over all w in W of ||v − w||
Intersections
Given two vector spaces V and W, what is the overlapping area between the two? We
define an arbitrary vector z that is a component of both V and W. If the columns of the
matrices V and W are basis vectors for the two spaces, then z = Va = Wb for some
coefficient vectors a and b, which we can rearrange into:

[V  −W] [a; b] = 0

So the stacked coefficient vector [a; b] lies in N([V  −W]), where N is the nullspace.

Linear Spaces
Linear Spaces are like Vector Spaces, but are more general. We will define Linear
Spaces, and then use that definition later to define Function Spaces.

If we have a space X, elements in that space f and g, and scalars a and b, the following
rules must hold for X to be a linear space:

1. If f and g are in X, then af + bg is also in X.
2. f + g = g + f
3. There is a null element φ such that φ + f = f.
4. For every f in X, there is a negative element (-f) in X.
5. f + (-f) = φ

Matrices
Derivatives
Consider the following set of linear equations:

a = bx1 + cx2
d = ex1 + fx2

We can define the matrix A to represent the coefficients, the vector B as the results, and
the vector x as the variables:

A = [ b  c ]     x = [ x1 ]     B = [ a ]
    [ e  f ]         [ x2 ]         [ d ]

And rewriting the equation in terms of the matrices, we get:

B = Ax

Now, let's say we want the derivative of this equation with respect to the vector x. The
entries of A are constants, so analyzing the right side shows us:

d(Ax)/dx = A

Pseudo-Inverses
There are special matrices known as pseudo-inverses, that satisfy some of the
properties of an inverse, but not others. To recap, if we have two square matrices A and
B, that are both n × n, then if the following equation is true, we say that A is the inverse
of B, and B is the inverse of A:

AB = BA = I

Right Pseudo-Inverse

Consider the following matrix:

R = A^T [A A^T]^-1

We call this matrix R the right pseudo-inverse of A, because:

AR = I

but

RA ≠ I

We will denote the right pseudo-inverse of A as A_R^+.


Left Pseudo-Inverse

Consider the following matrix:

L = [A^T A]^-1 A^T

We call L the left pseudo-inverse of A because

LA = I

but

AL ≠ I

We will denote the left pseudo-inverse of A as A_L^+.
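
A small numerical sketch (Python with NumPy; the matrix A is an arbitrary example)
illustrating both pseudo-inverses:

    import numpy as np

    # A "wide" matrix (more columns than rows) has a right pseudo-inverse.
    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 4.0]])
    R = A.T @ np.linalg.inv(A @ A.T)      # R = A^T [A A^T]^-1
    print(np.allclose(A @ R, np.eye(2)))  # True: AR = I
    print(np.allclose(R @ A, np.eye(3)))  # False: RA != I

    # A "tall" matrix (more rows than columns) has a left pseudo-inverse.
    B = A.T                               # 3x2
    L = np.linalg.inv(B.T @ B) @ B.T      # L = [B^T B]^-1 B^T
    print(np.allclose(L @ B, np.eye(2)))  # True: LB = I
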

Matrix Forms
Matrices that follow certain predefined formats are useful in a number of computations.
We will discuss some of the common matrix formats here. Later chapters will show how
these formats are used in calculations and analysis.

Diagonal Matrix
A diagonal matrix is a matrix such that:

a_ij = 0 for all i ≠ j

In other words, all the elements off the main diagonal are zero, and the diagonal elements
may be (but don't need to be) non-zero.

Companion Form Matrix


If we have the following characteristic polynomial for a matrix:

p(λ) = λ^n + a_{n-1}λ^{n-1} + ... + a_1λ + a_0

We can create a companion form matrix in one of two ways:

    [ 0     1     0    ...  0        ]
    [ 0     0     1    ...  0        ]
    [ :     :     :         :        ]
    [ 0     0     0    ...  1        ]
    [ -a_0  -a_1  -a_2 ...  -a_{n-1} ]

Or, we can also write it as the transpose of the matrix above.

Jordan Canonical Form


To discuss the Jordan canonical form, we first need to introduce the idea of the Jordan
Block:

Jordan Blocks

A Jordan block is a square matrix such that all the diagonal elements are equal, and all the
super-diagonal elements (the elements directly above the diagonal elements) are all 1. To
illustrate this, here is an example of an n-dimensional Jordan block:

    [ λ  1  0  ...  0 ]
    [ 0  λ  1  ...  0 ]
    [ :  :  :       : ]
    [ 0  0  0  ...  1 ]
    [ 0  0  0  ...  λ ]

Canonical Form
A square matrix is in Jordan Canonical form if it is a diagonal matrix, or if it has one
of the following two block-diagonal forms:

    [ D  0    ...  0   ]
    [ 0  J_1  ...  0   ]
    [ :  :         :   ]
    [ 0  0    ...  J_k ]

Or:

    [ J_1  0    ...  0   ]
    [ 0    J_2  ...  0   ]
    [ :    :         :   ]
    [ 0    0    ...  J_k ]

where the D element is a diagonal block matrix, and the J blocks are in Jordan block
form.

Quadratic Forms
If we have an n × 1 vector x, and an n × n symmetric matrix M, we can write:

xTMx = a

Where a is a scalar value. Equations of this form are called quadratic forms.

Matrix Definiteness
Based on the quadratic forms of a matrix, we can create a certain number of categories
for special types of matrices:

1. if x^T M x > 0 for all x ≠ 0, then the matrix is positive definite.

2. if x^T M x ≥ 0 for all x, then the matrix is positive semi-definite.
3. if x^T M x < 0 for all x ≠ 0, then the matrix is negative definite.
4. if x^T M x ≤ 0 for all x, then the matrix is negative semi-definite.

These classifications are used commonly in control engineering.


Eigenvalues and Eigenvectors
The Eigen Problem
This page is going to talk about the concept of Eigenvectors and Eigenvalues, which are
important tools in linear algebra, and which play an important role in State-Space control
systems. The "Eigen Problem" stated simply, is that given a square matrix A which is n ×
n, there exists a set of n scalar values λ and n corresponding non-trivial vectors v such
that:

Av = λv

We call λ the eigenvalues of A, and we call v the corresponding eigenvectors of A. We
can rearrange this equation as:

(A − λI)v = 0

For this equation to be satisfied so that v is non-trivial, the matrix (A - λI) must be
singular. That is:

| A − λI | = 0

Characteristic Equation
The characteristic equation of a square matrix A is given by:

[Characteristic Equation]

| A − λI | = 0

Where I is the identity matrix, and λ is the set of eigenvalues of matrix A. From this
equation we can solve for the eigenvalues of A, and then using the equations discussed
above, we can calculate the corresponding eigenvectors.

In general, we can expand the characteristic equation as:

[Characteristic Polynomial]

λ^n + c_{n-1}λ^{n-1} + ... + c_1λ + c_0 = 0

This equation satisfies the following properties:

1. | A | = (−1)^n c_0
2. A is nonsingular if c0 is non-zero.
Example: 2 × 2 Matrix

Let's say that X is a square matrix of order 2, as such:

X = [ a  b ]
    [ c  d ]

Then we can use this value in our characteristic equation:

(a − λ)(d − λ) − (b)(c) = 0

The roots of the above equation (the values for λ that satisfy the equality) are the
eigenvalues of X.
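
Here is a minimal numerical sketch of this example (Python with NumPy, with
arbitrary values chosen for a, b, c, and d):

    import numpy as np

    # An example 2x2 matrix with a = 2, b = 1, c = 1, d = 2.
    X = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Roots of (a - lam)(d - lam) - bc = lam^2 - 4*lam + 3 = 0 are 3 and 1.
    print(np.roots([1.0, -4.0, 3.0]))   # [3. 1.]

    # NumPy computes the same eigenvalues (and the eigenvectors) directly.
    lam, v = np.linalg.eig(X)
    print(lam)                                         # [3. 1.]
    print(np.allclose(X @ v[:, 0], lam[0] * v[:, 0]))  # True: Xv = lam*v
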

Eigenvalues
The solutions, λ, of the characteristic equation for matrix X are known as the eigenvalues
of the matrix X.

Eigenvalues satisfy the following properties:

1. If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n.


2. If λ is a complex eigenvalue of A, then λ* (the complex conjugate) is also an
eigenvalue of A.
3. If any of the eigenvalues of A are zero, then A is singular. If A is non-singular, all
the eigenvalues of A are nonzero.

Eigenvectors
The characteristic equation can be rewritten as such:

Xv = λv

Where X is the matrix under consideration, and λ are the eigenvalues for matrix X. For
every unique eigenvalue, there is a solution vector v to the above equation, known as an
eigenvector. The above equation can also be rewritten as:

(X − λI) v = 0

Where the resulting values of v for each eigenvalue λ are the eigenvectors of X. There is at
least one eigenvector for each unique eigenvalue of X. From this equation, we can see that
the eigenvectors of X form the nullspace:

N(X − λI)

And therefore, we can find the eigenvectors through row-reduction of that matrix.

Eigenvectors satisfy the following properties:

1. If v is a complex eigenvector of A, then v* (the complex conjugate) is also an
eigenvector of A.
2. Distinct eigenvectors of A are linearly independent.
3. If A is n × n, and if there are n distinct eigenvectors, then the eigenvectors of A
form a complete basis set for R^n.

Generalized Eigenvectors
Let's say that matrix A has the following characteristic polynomial:

(λ − λ_1)^{d_1} (λ − λ_2)^{d_2} ... (λ − λ_s)^{d_s} = 0

Where d_1, d_2, ..., d_s are known as the algebraic multiplicities of the eigenvalues λ_i. Also
note that d_1 + d_2 + ... + d_s = n, and s < n. In other words, the eigenvalues of A are
repeated. Therefore, this matrix does not have n distinct eigenvectors. However, we can
create vectors known as generalized eigenvectors to make up the missing eigenvectors
by satisfying the following equations:

(A − λI)^k v_k = 0
(A − λI)^{k−1} v_k ≠ 0

Right and Left Eigenvectors


The equation for determining eigenvectors is:

(A − λI)v = 0

And because the eigenvector v is on the right, these are more appropriately called "right
eigenvectors". However, if we rewrite the equation as follows:

u^T (A − λI) = 0

The vectors u are called the "left eigenvectors" of matrix A.


Diagonalization
Similarity
Matrices A and B are said to be similar to one another if there exists an invertible matrix
T such that:

T^-1 A T = B

If there exists such a matrix T, the matrices are similar. Similar matrices have the same
eigenvalues. If A has eigenvectors v1, v2 ..., then B has eigenvectors u given by:

u_i = T^-1 v_i

Matrix Diagonalization
Some matrices are similar to diagonal matrices using a transition matrix, T. We will
say that matrix A is diagonalizable if the following equation can be satisfied:

T^-1 A T = D

Where D is a diagonal matrix. An n × n square matrix is diagonalizable if and only if it
has n linearly independent eigenvectors.

Transition Matrix
If an n × n square matrix has n distinct eigenvalues λ, and therefore n distinct
eigenvectors v, we can create a transition matrix T as:

T = [v_1 v_2 ... v_n]

And transforming matrix X gives us:

T^-1 X T = D = diag(λ_1, λ_2, ..., λ_n)

Therefore, if the matrix has n distinct eigenvalues, the matrix is diagonalizable, and the
diagonal entries of the diagonal matrix are the corresponding eigenvalues of the matrix.
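
A minimal numerical sketch of diagonalization (Python with NumPy; the matrix A
is an arbitrary example with distinct eigenvalues):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Columns of T are the eigenvectors of A.
    lam, T = np.linalg.eig(A)

    # D = T^-1 A T is diagonal, with the eigenvalues on the diagonal.
    D = np.linalg.inv(T) @ A @ T
    print(np.round(D, 12))   # diag(5, 2), up to eigenvalue ordering
    print(lam)               # [5. 2.]
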

Complex Eigenvalues
Consider the situation where a matrix A has 1 or more complex conjugate eigenvalue
pairs. The eigenvectors of A will also be complex. The resulting diagonal matrix D will
have the complex eigenvalues as the diagonal entries. In engineering situations, it is often
not a good idea to deal with complex matrices, so other matrix transformations can be
used to create matrices that are "nearly diagonal".

Generalized Eigenvectors
If the matrix A does not have a complete set of eigenvectors, that is, if it has d
eigenvectors and n − d generalized eigenvectors, then the matrix A is not diagonalizable.
However, the next best thing is achieved, and matrix A can be transformed into a Jordan
Canonical Matrix. Each set of generalized eigenvectors that are formed from a single
eigenvector basis will create a Jordan block. All the distinct eigenvectors that do not
spawn any generalized eigenvectors will form a diagonal block in the Jordan matrix.

Spectral Decomposition
If λ_i are the n distinct eigenvalues of matrix A, and v_i are the corresponding n distinct
eigenvectors, and if w_i are the n distinct left-eigenvectors, then the matrix A can be
represented as a sum:

A = Σ_{i=1}^{n} λ_i v_i w_i^T

this is known as the spectral decomposition of A.

Error Estimation
Consider a scenario where the matrix representation of a system A differs from the actual
implementation of the system by a factor of ∆A. In other words, our system uses the
matrix:

A + ∆A

From the study of Control Systems, we know that the values of the eigenvalues can
affect the stability of the system. For that reason, we would like to know how a small
error in A will affect the eigenvalues.

First off, we assume that ∆A is a small shift. The definition of "small" in this sense is
arbitrary, and will remain undefined. Keep in mind that the techniques discussed here
are more accurate the smaller ∆A is.

If ∆A is the error in the matrix A, then ∆λ is the error in the eigenvalues and ∆v is the
error in the eigenvectors. The characteristic equation becomes:
(A + ∆A)(v + ∆v) = (λ + ∆λ)(v + ∆v)

We have an equation now with two unknowns: ∆λ and ∆v. In other words, we dont know
how a small change in A will affect the eigenvalues and eigenvectors. If we multiply out
both sides, we get:

Av + A∆v + ∆Av + ∆A∆v = λv + λ∆v + ∆λv + ∆λ∆v

This situation seems hopeless, until we pre-multiply both sides by the corresponding left-
eigenvector w:

w^T Av + w^T A∆v + w^T ∆Av + w^T ∆A∆v = λw^T v + λw^T ∆v + ∆λw^T v + ∆λw^T ∆v

Terms where two ∆ errors (which are known to be small, by definition) are multiplied
together, we can say are negligible, and set them to zero. Also, we know from our left-
eigenvector equation that:

w^T A = λw^T

so the w^T A∆v term on the left cancels the λw^T ∆v term on the right. (Left and right
eigenvectors corresponding to different eigenvalues are orthogonal, w_i^T v_j = 0 for
i ≠ j, but for the matching pair w^T v is non-zero.)

Substituting these results, where necessary, into our long equation above, we get the
following simplification:

w^T ∆A v = ∆λ w^T v

And solving for the change in the eigenvalue gives us:

∆λ = (w^T ∆A v) / (w^T v)

This approximate result is only good for small values of ∆A, and the result is less precise
as the error increases.
Matrix Functions
If we have functions, and we use a matrix as the input to those functions, the output
values are not always intuitive. For instance, if we have a function f(x), and as the input
argument we use matrix A, the output matrix is not necessarily the function f applied to
the individual elements of A.

Diagonal Matrix
In the special case of diagonal matrices, the result of f(A) is the function applied to each
element of the diagonal matrix. If:

A = diag(a_1, a_2, ..., a_n)

Then the function f(A) is given by:

f(A) = diag(f(a_1), f(a_2), ..., f(a_n))

Jordan Cannonical Form


Matrices in Jordan Canonical form also have an easy way to compute the functions of
the matrix. However, this method is not nearly as easy as the diagonal matrices described
above.

If we have a matrix in Jordan block form, A, with diagonal value λ, the function f(A) is the
upper-triangular matrix whose entry k places above the diagonal is f^(k)(λ)/k!:

    [ f(λ)  f'(λ)  f''(λ)/2! ...  f^(n-1)(λ)/(n-1)! ]
    [ 0     f(λ)   f'(λ)     ...                    ]
    [ :     :      :              :                 ]
    [ 0     0      0         ...  f(λ)              ]

The matrix indices have been removed, because in Jordan block form, all the diagonal
elements must be equal.

If the matrix is in Jordan Canonical (block-diagonal) form, the value of the function is
given as the function applied to the individual diagonal blocks.

Cayley Hamilton Theorem


If the characteristic equation of matrix A is given by:

p(λ) = | A − λI | = 0

Then the Cayley-Hamilton theorem states that the matrix A itself is also a valid solution
to that equation:

p(A) = 0

Another theorem worth mentioning here (and by "worth mentioning", we really mean
"fundamental for some later topics") is stated as:

If λ are the eigenvalues of matrix A, and if there is a function f that is defined as a linear
combination of powers of λ:

f(λ) = Σ_k a_k λ^k

If this function has a radius of convergence S, and if all the eigenvalues of A have
magnitudes less than S, then the matrix A itself is also a solution to that function:

f(A) = Σ_k a_k A^k

Matrix Exponential
If we have a matrix A, we can raise e to the power of that matrix:

e^A

It is important to note that this is not necessarily (not usually) equal to each individual
element of A being raised to a power of e. Using the Taylor-series expansion of the
exponential, we can show that:

e^A = I + A + A^2/2! + A^3/3! + ... = Σ_{n=0}^∞ A^n/n!

In other words, the matrix exponential can be reduced to a sum of powers of the matrix.
This follows from both the Taylor series expansion of the exponential function, and the
Cayley-Hamilton theorem discussed previously.

However, this infinite sum is expensive to compute, and because the sequence is infinite,
there is no good cut-off point where we can stop computing terms and call the answer a
"good approximation". To alleviate this point, we can turn to the Cayley-Hamilton
Theorem. Solving the theorem for A^n, using the characteristic polynomial
λ^n + c_{n-1}λ^{n-1} + ... + c_1λ + c_0 = 0, we get:

A^n = −(c_{n-1}A^{n-1} + ... + c_1A + c_0I)

Multiplying both sides of the equation by A, we get:

A^{n+1} = −(c_{n-1}A^n + ... + c_1A^2 + c_0A)

We can substitute the first equation into the second equation, and the result will be A^{n+1}
in terms of the powers A^0 through A^{n-1}. In fact, we can repeat that process so that A^m, for
any arbitrarily high power m, can be expressed as a linear combination of the powers A^0
through A^{n-1}. Applying this result to our exponential problem:

e^A = α_0 I + α_1 A + ... + α_{n-1} A^{n-1}

Where we can solve for the α terms, and have a finite polynomial that expresses the
exponential.
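
A minimal numerical sketch (Python with NumPy and SciPy; the matrix A is an
arbitrary example) comparing a truncated power series against a library routine:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])

    # Truncated Taylor series: e^A ~ sum of A^k / k! for k = 0 .. 28.
    approx = np.zeros_like(A)
    term = np.eye(2)          # the k = 0 term, A^0 / 0! = I
    for k in range(1, 30):
        approx += term
        term = term @ A / k   # build A^k / k! from the previous term

    # SciPy's expm computes the same thing with a robust algorithm.
    print(np.allclose(approx, expm(A), atol=1e-10))  # True
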

Inverse
The inverse of a matrix exponential is given by:

(e^A)^-1 = e^(-A)

Derivative
The derivative of a matrix exponential with respect to t is:

d/dt e^(At) = A e^(At) = e^(At) A

Notice that the exponential matrix is commutative with the matrix A. This is not the case
with other functions, necessarily.
Sum of Matrices
If we have a sum of matrices in the exponent, we cannot separate them unless the
matrices commute:

e^(A + B) ≠ e^A e^B  (in general; equality holds if AB = BA)
Differential Equations
If we have a first-degree differential equation of the following form:

x'(t) = Ax(t) + f(t)

With initial conditions

x(t0) = c

Then the solution to that equation is given in terms of the matrix exponential:

x(t) = e^(A(t − t0)) c + ∫ from t0 to t of e^(A(t − τ)) f(τ) dτ

This equation shows up frequently in control engineering.

Laplace Transform
As a matter of some interest, we will show the Laplace transform of a matrix exponential
function:

L{e^(At)} = (sI − A)^-1
We will not use this result any further in this book, although other books on engineering
might make use of it.

Lyapunov Equation
[Lyapunov's Equation]

AM + MB = C

Where A, B and C are constant square matrices, and M is the solution that we are trying
to find. If A, B, and C are of the same order, and if A and B have no eigenvalues in
common, then the solution can be given in terms of matrix exponentials (assuming the
integral converges, for example when all the eigenvalues of A and B have negative real
parts):

M = −∫ from 0 to ∞ of e^(At) C e^(Bt) dt
Function Spaces
A function space is a linear space where all the elements of the space are functions. A
function space that has a norm operation is known as a normed function space. The
spaces we consider will all be normed.

Continuity
f(x) is continuous at x0 if, for every ε > 0 there exists a δ(ε) > 0 such that |f(x) - f(x0)| <
ε when |x - x0| < δ(ε).

Common Function Spaces


Here is a listing of some common function spaces. This is not an exhaustive list.

C Space

The C function space is the set of all functions that are continuous.

The metric for C space is defined as:

ρ(f, g) = max over x in [a, b] of |f(x) − g(x)|

Consider the metric of sin(x) and cos(x) over a full period:

ρ(sin, cos) = max |sin(x) − cos(x)| = √2
Cp Space

The Cp space is the set of all continuous functions for which the first p derivatives are also
continuous. If p = ∞, the function is called infinitely continuous. The C∞ set is the set
of all such functions. Some examples of functions that are infinitely continuous are
exponentials, sinusoids, and polynomials.

L Space

The L space is the set of all functions that are finitely integrable over a given interval [a,
b].
f(x) is in L(a, b) if:

∫ from a to b of |f(x)| dx < ∞

Lp Space

The Lp space is the set of all functions that are finitely integrable over a given interval [a,
b] when raised to the power p:

∫ from a to b of |f(x)|^p dx < ∞

Most importantly for engineering is the L2 space, or the set of functions that are "square
integrable".

L2 Space
The L2 space is very important to engineers, because functions in this space do not need
to be continuous. Many discontinuous engineering functions, such as the delta (impulse)
function, the unit step function, and other discontinuous functions are part of this space.

L2 Functions
A large number of functions qualify as L2 functions, including uncommon, discontinuous,
piece-wise, and other functions. A function which, over a finite range, has a finite number
of discontinuities is an L2 function. For example, a unit step and an impulse function are
both L2 functions. Also, other functions useful in signal analysis, such as square waves,
triangle waves, wavelets, and other functions are L2 functions.

In practice, most physical systems have a finite amount of noise associated with them.
Noisy signals and random signals, if finite, are also L2 functions: this makes analysis of
those functions using the techniques listed below easy.

Null Function
The null functions of L2 are the set of all functions φ in L2 that satisfy the equation:

∫ from a to b of |φ(x)|^2 dx = 0

for all a and b.

Norm
The L2 norm is defined as follows:

[L2 Norm]

||f|| = ( ∫ from a to b of |f(x)|^2 dx )^(1/2)

If the norm of the function is 1, the function is normal.

We can show that the derivative of the norm squared of a vector x, used in the
least-squares derivation earlier, is:

d/dx ||x||^2 = d/dx <x, x> = 2x^T

Scalar Product
The scalar product in L2 space is defined as follows:

[L2 Scalar Product]

<f, g> = ∫ from a to b of f(x) g(x) dx

If the scalar product of two functions is zero, the functions are orthogonal.

We can show that given coefficient matrices A and B, and variable x, the derivative of the
scalar product can be given as:

d/dx <Ax, Bx> = (Bx)^T A + (Ax)^T B

We can recognize this as the product rule of differentiation. Generalizing, we can say
that:

d/dx <f, g> = <df/dx, g> + <f, dg/dx>

We can also say that the derivative of a matrix A times a vector x is:

d(Ax)/dx = A
Metric
The metric of two functions (we will not call it the "distance" here, because that word has
no meaning in a function space) will be denoted with ρ(x,y). We can define the metric of
an L2 function as follows:

[L2 Metric]

ρ(f, g) = ||f − g|| = ( ∫ from a to b of (f(x) − g(x))^2 dx )^(1/2)

Cauchy-Schwarz Inequality
The Cauchy-Schwarz Inequality still holds for L2 functions, and is restated here:

|<f, g>| ≤ ||f|| ||g||

Linear Independence
A set of functions {φ_1, ..., φ_n} in L2 are linearly independent if:

a_1φ_1(x) + a_2φ_2(x) + ... + a_nφ_n(x) = 0

If and only if all the a coefficients are 0.

Gram-Schmidt Orthogonalization
The Gram-Schmidt technique that we discussed earlier still works with functions, and
we can use it to form a set of linearly independent, orthogonal functions in L2.

For a set of functions φ, we can make a set of orthogonal functions ψ that span the same
space but are orthogonal to one another:

[Gram-Schmidt Orthogonalization]

ψ_1 = φ_1

ψ_k = φ_k − Σ_{j=1}^{k−1} (<φ_k, ψ_j> / <ψ_j, ψ_j>) ψ_j

Basis
The L2 space is infinite-dimensional, which means that any basis for the L2 space will
require an infinite number of basis functions. To prove that an infinite set of orthogonal
functions is a basis for the L2 space, we need to show that the null function is the only
function in L2 that is orthogonal to all the basis functions. If the null function is the only
function that satisfies this relationship, then the set is a basis set for L2.

By definition, we can express any function in L2 as a linear sum of the basis elements. If
we have basis elements φ_n, we can define any other function ψ as a linear sum:

ψ(x) = Σ_n a_n φ_n(x)
We will explore this important result in the section on Fourier Series.

Banach and Hilbert Spaces


There are some special spaces known as Banach spaces, and Hilbert spaces.

Convergent Functions
Let's define the piece-wise function φ_n(x) as:

φ_n(x) = 0 for x < 0;  nx for 0 ≤ x ≤ 1/n;  1 for x > 1/n

We can see that as we let n → ∞, this function becomes the unit step function. We can
say that as n approaches infinity, this function converges to the unit step function.
Notice that this function only converges in the L2 space, because the unit step function
does not exist in the C space (it is not continuous).

Convergence

We can say that a sequence of functions φ_n converges to a function φ* if:

lim as n → ∞ of ||φ_n − φ*|| = 0

We call such a sequence, one that converges to a given function as n approaches infinity,
a Cauchy sequence.

Complete Function Spaces


A function space is called complete if all sequences in that space converge to another
function in that space.

Banach Space
A Banach Space is a complete normed function space.

Hilbert Space
A Hilbert Space is a Banach Space with respect to a norm induced by the scalar product.
That is, if there is a scalar product in the space X, then we can say the norm is induced by
the scalar product if we can write:

||f|| = √<f, f>

That is, that the norm can be written as a function of the scalar product. In the L2 space,
we can define the norm as:

||f||^2 = <f, f> = ∫ from a to b of |f(x)|^2 dx

If the space is a Banach space under this induced norm, then it is a Hilbert space.

In a Hilbert Space, the Parallelogram rule holds for all members f and g in the function
space:

||f + g||^2 + ||f − g||^2 = 2||f||^2 + 2||g||^2

The L2 space is a Hilbert Space. The C space, however, is not.

Fourier Series
The L2 space is an infinite function space, and therefore a linear combination of any
infinite set of orthogonal functions can be used to represent any single member of the L2
space. The decomposition of an L2 function in terms of an infinite basis set is a technique
known as the Fourier Decomposition of the function, and produces a result called the
Fourier Series.

Fourier Basis
Let's consider a set of L2 functions, φ as follows:

φ = {1, sin(nπx), cos(nπx)}, n = 1, 2, ...
We can prove that over a range [a, b] = [0, 2] (one full period), all of these functions are
mutually orthogonal:

<sin(nπx), cos(mπx)> = 0
<sin(nπx), sin(mπx)> = 0, m ≠ n
<cos(nπx), cos(mπx)> = 0, m ≠ n

And both the sinusoidal functions are orthogonal with the function φ(x) = 1. Because this
serves as an infinite orthogonal set in L2, this is also a valid basis set in that space.
Therefore, we can decompose any function in L2 as the following sum:

[Classical Fourier Series]

f(x) = a_0 + Σ_{n=1}^∞ a_n sin(nπx) + Σ_{n=1}^∞ b_n cos(nπx)

However, the difficulty occurs when we need to calculate the a and b coefficients. We
will show the method to do this below:

a0: The Constant Term


Calculation of a0 is the easiest, and therefore we will show how to calculate it first. We
first create an error function, E, that is equal to the squared norm of the difference
between the function f(x) and the infinite sum above:

E = || f(x) − a_0 − Σ_n a_n sin(nπx) − Σ_n b_n cos(nπx) ||^2

For ease, we will write all the basis functions as the set φ_n, described above:

E = || f(x) − Σ_n a_n φ_n(x) ||^2

Combining the last two functions together, and writing the norm as an integral, we can
say:

E = ∫ from a to b of ( f(x) − Σ_n a_n φ_n(x) )^2 dx
We attempt to minimize this error function with respect to the constant term. To do this,
we differentiate both sides with respect to a_0, and set the result to zero:

∂E/∂a_0 = −2 ∫ from a to b of ( f(x) − Σ_n a_n φ_n(x) ) φ_0(x) dx = 0

The φ_0 term comes out of the sum because of the chain rule: it is the only term in the
entire sum dependent on a_0. We can separate out the integral above as follows:

∫ from a to b of f(x) φ_0(x) dx = a_0 ∫ from a to b of φ_0(x)^2 dx

All the other terms drop out of the infinite sum because they are all orthogonal to φ_0.
Again, we can rewrite the above equation in terms of the scalar product:

<f, φ_0> = a_0 <φ_0, φ_0>

And solving for a_0, we get our final result:

a_0 = <f, φ_0> / <φ_0, φ_0>

Sin Coefficients
Using the above method, we can solve for the a_n coefficients of the sin terms:

a_n = <f(x), sin(nπx)> / <sin(nπx), sin(nπx)>

Cos Coefficients
Also using the above method, we can solve for the b_n coefficients of the cos terms:

b_n = <f(x), cos(nπx)> / <cos(nπx), cos(nπx)>
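
To make the procedure concrete, here is a small numerical sketch (Python with SciPy;
the example function f(x) = x and the interval [0, 2] are arbitrary illustrative choices)
that computes a_0 and the first few sin coefficients by direct integration:

    import numpy as np
    from scipy.integrate import quad

    # Decompose f(x) = x on [0, 2] in the basis {1, sin(n*pi*x), cos(n*pi*x)}.
    f = lambda x: x

    # a_0 = <f, 1> / <1, 1>
    a0 = quad(f, 0, 2)[0] / quad(lambda x: 1.0, 0, 2)[0]

    def sin_coeff(n):
        # a_n = <f, sin(n pi x)> / <sin(n pi x), sin(n pi x)>
        num = quad(lambda x: f(x) * np.sin(n * np.pi * x), 0, 2)[0]
        den = quad(lambda x: np.sin(n * np.pi * x) ** 2, 0, 2)[0]
        return num / den

    print(a0)   # 1.0, the average value of f on [0, 2]
    print([round(sin_coeff(n), 4) for n in (1, 2, 3)])
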

Arbitrary Basis Expansion


The classical Fourier series uses the following basis:

φ(x) = {1, sin(nπx), cos(nπx)}, n = 1, 2, ...

However, we can generalize this concept to extend to any orthogonal basis set from the
L2 space.
We can say that if we have our orthogonal basis set that is composed of an infinite set of
arbitrary, orthogonal L2 functions:

{φ_1(x), φ_2(x), φ_3(x), ...}

We can define any L2 function f(x) in terms of this basis set:

[Generalized Fourier Series]

f(x) = Σ_n a_n φ_n(x)

Using the method from the previous chapter, we can solve for the coefficients as follows:

[Generalized Fourier Coefficient]

a_n = <f, φ_n> / <φ_n, φ_n>

Bessel Equation and Parseval Theorem


Bessel's equation relates the original function to the Fourier coefficients a_n:

[Bessel's Equation]

Σ_n a_n^2 <φ_n, φ_n> ≤ ||f||^2

If the basis set is infinitely orthogonal, and if an infinite sum of the basis functions
perfectly reproduces the function f(x), then the above equation will be an equality, known
as Parseval's Theorem:

[Parseval's Theorem]

Σ_n a_n^2 <φ_n, φ_n> = ||f||^2

Engineers may recognize this as a relationship between the energy of the signal, as
represented in the time and frequency domains. However, Parseval's rule applies not only
to the classical Fourier series coefficients, but also to the generalized series coefficients
as well.
Multi-Dimensional Fourier Series
The concept of the fourier series can be expanded to include 2-dimensional and n-
dimensional function decomposition as well. Let's say that we have a function in terms of
independent variables x and y. We can decompose that function as a double-summation
as follows:

f(x, y) = Σ_i Σ_j a_ij φ_ij(x, y)

Where φ_ij is a 2-dimensional set of orthogonal basis functions. We can define the
coefficients as:

a_ij = <f, φ_ij> / <φ_ij, φ_ij>
This same concept can be expanded to include series with n-dimensions.


Wavelets
Wavelets are orthogonal basis functions that only exist for certain windows in time. This
is in contrast to sinusoidal waves, which exist for all times t. A wavelet, because it is
dependent on time, can be used as a basis function. A wavelet basis set gives rise to
wavelet decomposition, which is a 2-variable decomposition of a 1-variable function.
Wavelet analysis allows us to decompose a function in terms of time and frequency,
while fourier decomposition only allows us to decompose a function in terms of
frequency.

Mother Wavelet
If we have a basic wavelet function ψ(t), we can write a 2-dimensional function known as
the mother wavelet function as such:

ψ_jk(t) = 2^(j/2) ψ(2^j t − k)

Wavelet Series
If we have our mother wavelet function, we can write out a Fourier-style series as a
double-sum of all the wavelets:

f(t) = Σ_j Σ_k b_jk ψ_jk(t)
Scaling Function
Sometimes, we can add in an additional function, known as a scaling function φ(t):

f(t) = Σ_k c_k φ(t − k) + Σ_j Σ_k b_jk ψ_jk(t)

The idea is that the scaling function is larger than the wavelet functions, and occupies
more time. In this case, the scaling function will show long-term changes in the signal,
and the wavelet functions will show short-term changes in the signal.
Random Variables
A random variable is a variable that takes a random value at any particular point t in time.
The properties of the random variable are known as the distribution of the random
variable. We will denote random variables by the abbreviation "r.v.", or simply "rv". This
is a common convention used in the literature concerning this subject.

Probability Function
The probability function, P[], will denote the probability of a particular occurrence
happening. Here are some examples:

• P[X < x], the probability that the random variable X has a value less then some
variable x.
• P[X = x], the probability that the random variable X has a value equal to some
variable x.
• P[X < x,Y > y], the probability that the random variable X has a value equal to x,
and the random variable Y has a value equal to y.

Example: Fair Coin

Consider the example that a fair coin is flipped. We will define X to be the random
variable, and we will define "head" to be 1, and "tail" to be 0. What is the probability that
the coin is a head?

P[X = 1] = 50%

Example: Fair Dice

Consider now a fair 6-sided die. X is the r.v., and the numerical value on the face of the
die is the value that X can take. What is the probability that when the die is rolled, the
value is less than 4?

P[X < 4] = 50%

What is the probability that the value will be even?

P[X is even] = 50%

Notation
We will typically write random variables as upper-case letters, such as Z, X, Y, etc.
Lower-case letters will be used to denote variables that are related with the random
variables. For instance, we will use "x" as a variable that is related to "X", the random
variable.
When we are using random variables in conjunction with matrices, we will use the
following conventions:

1. Random variables, and random vectors or matrices will be denoted with letters
from the end of the alphabet, such as W, X, Y, and Z. Also, Θ and Ω will be used
as random variables, especially when we talk about random frequencies.
2. A random matrix or vector, will be denoted with a capital letter. The entries in
that random vector or matrix will be denoted with capital letters and subscripts.
These matrices will also use letters from the end of the alphabet, or the greek
letters Θ and Ω.
3. A regular coefficient vector or matrix that is not random will use a capital letter
from the beginning of the alphabet, such as A, B, C, or D.
4. Special vectors or matrices that are derived from random variables, such as
correlation matrices, or covariance matrices, will use capital letters from the
middle of the alphabet, such as K, M, N, P, or Q.

Any other variables or notations will be explained in the context of the page where it
appears.

Conditional Probability
A conditional probability is the probability measure of one event happening given that
another event already has happened. For instance, what are the odds that your computer
system will suddenly break while you are reading this page?

P[computer breaks] = small

The odds that your computer will suddenly stop working are very small. However, what
are the odds that your computer will break given that it just got struck by lightning?

P[computer breaks | struck by lightning] = large

The vertical bar separates the event whose probability we want (on the left) from the
condition, the event that has already happened and might affect our outcome (on the
right). As another example, what are the odds that a rolled die will show a 2, assuming
that we know the number is less than 4?

P[X = 2 | X < 4] = 33.33%

If X is less than 4, we know it can only be one of the values 1, 2, or 3. As another
example, what if a person asks you "I'm thinking of a number between 1 and 10": what
are your odds of guessing the right number?

P[X = x | 0 < X < 10] = 10%

Where x is the correct number that you are trying to guess.


Probability Functions
Probability Density Function
The probability density function, or pdf of a random variable is the function defined
by:

fX(x) = P[X = x]

Remember here that X is the random variable, and x is a related variable (but is not
random). The subscript X on fX denotes that this is the pdf for the X variable.

pdf's follow a few simple rules:

1. The pdf is always non-negative.


2. The area under the pdf curve is 1.

Cumulative Distribution Function


The cumulative distribution function (CDF) is also known as the Probability
Distribution Function (PDF). To reduce confusion with the pdf of a random variable, we
will use the acronym CDF to denote this function. The CDF of a random variable is the
function defined by:

F_X(x) = P[X ≤ x]

The CDF and the pdf of a random variable are related:

f_X(x) = dF_X(x)/dx,   F_X(x) = ∫ from −∞ to x of f_X(u) du

The CDF is the function corresponding to the probability that the random variable X
takes a value less than or equal to a given value x. The CDF is a non-decreasing function,
and is always non-negative.

Example: X between two bounds


To determine whether our random variable X lies between two bounds, [a, b], we can
take the CDF functions:

P[a < X ≤ b] = F_X(b) − F_X(a)

Distributions
There are a number of common distributions that are used in conjunction with random
variables.

Uniform Distribution
The uniform distribution is one of the easiest distributions to analyze. Also, uniform
distributions of random numbers are easy to generate on computers, so they are typically
used in computer software.

Gaussian Distribution
The gaussian distribution, or the "normal distribution" is one of the most common
random distributions. A gaussian random variable is typically called a "normal" random
variable.

Where µ is the mean of the function, and σ2 is the variance of the function. we will
discuss both these terms later.

Expectation and Entropy


Expectation
The expectation operator of a random variable is defined as:

E[X] = ∫ from −∞ to ∞ of x f_X(x) dx

This operator is very useful, and we can use it to derive the moments of the random
variable.

Moments
A moment is a value that contains some information about the random variable. The n-th
moment of a random variable is defined as:

E[X^n] = ∫ from −∞ to ∞ of x^n f_X(x) dx

Mean

The mean value, or the "average value", of a random variable is defined as the first
moment of the random variable:

µ_X = E[X]

We will use the greek letter µ to denote the mean of a random variable.

Central Moments
A central moment is similar to a moment, but it is also dependent on the mean of the
random variable:

E[(X − µ_X)^n]

The first central moment is always zero.

Variance

The variance of a random variable is defined as the second central moment:

E[(X − µ_X)^2] = σ^2

The square-root of the variance, σ, is known as the standard deviation of the random
variable.
Mean and Variance

The mean and variance of a random variable can be related by:

σ^2 = E[X^2] − µ^2

This is an important relationship, and we will use it later.

Entropy
The entropy of a random variable X is defined as:

H[X] = −E[ln f_X(X)] = −∫ f_X(x) ln f_X(x) dx

SISO Transformations
Let's say that we have a random variable X that is the input into a given system. The
system output, Y is then also a random variable that is related to the input X by the
response of the system. In other words, we can say that:

Y = g(X)

Where g is the mathematical relationship between the system input and the system
output.

To discover information about Y, we can use the information we know about the r.v. X,
and the relationship g:

f_Y(y) = Σ_i f_X(x_i) / |g'(x_i)|

Where x_i are the roots of g(x) = y.

MISO Transformations
Consider now a system with two inputs, both of which are random (or pseudorandom, in
the case of non-deterministic data). For instance, let's consider a system with the
following inputs and outputs:

• X: non-deterministic data input


• Y: disruptive noise
• Z: System output

Our system satisfies the following mathematical relationship:

Z = g(X,Y)

Where g is the mathematical relationship between the system input, the disruptive noise,
and the system output. By knowing information about the distributions of X and Y, we
can determine the distribution of Z.

Correlation
Independence
Two random variables are called independent if changes in one do not affect, and are not
affected by, changes in the other.

Correlation
Two random variables are said to have correlation if they take the same values, or
similar values, at the same point in time. Independence implies that two random variables
will be uncorrelated, but two random variables being uncorrelated does not imply that
they are independent.

Random Vectors
Many of the concepts that we have learned so far have been dealing with random
variables. However, these concepts can all be translated to deal with vectors of random
numbers. A random vector X contains N elements, Xi, each of which is a distinct random
variable. The individual elements in a random vector may or may not be correlated or
dependent on one another.

Expectation
The expectation of a random vector is a vector of the expectation values of each element
of the vector. For instance:

E[X] = [E[X_1], E[X_2], ..., E[X_N]]^T

Using this definition, the mean vector of random vector X, denoted µ_X, is the vector
composed of the means of all the individual elements of X:

µ_X = E[X] = [µ_1, µ_2, ..., µ_N]^T

Correlation Matrix
The correlation matrix of a random vector X is defined as:

R_X = E[X X^T]

Where each element of the correlation matrix corresponds to the correlation between the
row element of X, and the column element of XT. The correlation matrix is a real-
symmetric matrix. If the off-diagonal elements of the correlation matrix are all zero, the
random vector is said to be uncorrelated. If the R matrix is an identity matrix, the random
vector is said to be "white". For instance, "white noise" is uncorrelated, and each element
of the vector has an equal correlation value.

Matrix Diagonalization

As discussed earlier, we can diagonalize a matrix by constructing the V matrix from the
eigenvectors of that matrix. If X is our non-diagonal matrix, we can create a diagonal
matrix D by:

D = V^-1 X V

If the X matrix is real symmetric (as is always the case with the correlation matrix), we
can simplify this to be:

D = V^T X V
Whitening

A matrix can be whitened by constructing a matrix W that contains the inverse
square roots of the eigenvalues of X on the diagonal:

W = diag(1/√λ_1, 1/√λ_2, ..., 1/√λ_n)

Using this W matrix, we can convert X into the identity matrix:

I = W^T V^T X V W
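
Here is a minimal numerical sketch of the whitening procedure (Python with NumPy;
the matrix X is an arbitrary real-symmetric example):

    import numpy as np

    # A hypothetical real-symmetric correlation matrix X.
    X = np.array([[4.0, 2.0],
                  [2.0, 3.0]])

    lam, V = np.linalg.eigh(X)        # eigh: for real-symmetric matrices
    W = np.diag(1.0 / np.sqrt(lam))   # inverse square roots of eigenvalues

    # W^T V^T X V W should be the identity matrix.
    I = W.T @ V.T @ X @ V @ W
    print(np.allclose(I, np.eye(2)))  # True
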

Simultaneous Diagonalization

If we have two matrices, X and Y, we can construct a matrix A that will satisfy the
following relationships:

A^T X A = I
A^T Y A = D

Where I is an identity matrix, and D is a diagonal matrix. This process is known as
simultaneous diagonalization. If we have the V and W matrices described above such that

I = W^T V^T X V W,

We can then construct the B matrix by applying this same transformation to the Y matrix:

W^T V^T Y V W = B

We can combine the eigenvectors of B into a transformation matrix Z such that:

Z^T B Z = D

We can then define our A matrix as:

A = VWZ
A^T = Z^T W^T V^T

This A matrix will satisfy the simultaneous diagonalization procedure, outlined above.

Covariance Matrix
The covariance matrix of two random vectors, X and Y, is defined as:

Q_XY = E[(X − µ_X)(Y − µ_Y)^T]

Where each element of the covariance matrix expresses the variance relationship between
the row element of X, and the column element of Y. The covariance matrix of a vector
with itself, Q_X = E[(X − µ_X)(X − µ_X)^T], is real symmetric.

We can relate the correlation matrix and the covariance matrix through the following
formula:

R_X = Q_X + µ_X µ_X^T

Cumulative Distribution Function


An N-vector X has a cumulative distribution function F_X of N variables that is defined as:

F_X(x) = P[X_1 ≤ x_1, X_2 ≤ x_2, ..., X_N ≤ x_N]
Probability Density Function


The probability density function of a random vector can be defined in terms of the Nth
partial derivative of the cumulative distribution function:

f_X(x) = ∂^N F_X(x) / (∂x_1 ∂x_2 ... ∂x_N)

If we know the density function, we can find the marginal density of the ith element of X
using N − 1 integrations, and from it the mean:

µ_i = ∫ ... ∫ x_i f_X(x) dx_1 dx_2 ... dx_N
Optimization
Optimization is an important concept in engineering. Finding any solution to a problem
is not nearly as good as finding the one "optimal solution" to the problem. Optimization
problems are typically reformatted so they become minimization problems, which are
well-studied problems in the field of mathematics.

Typically, when optimizing a system, the costs and benefits of that system are arranged
into a cost function. It is the engineer's job then to minimize this cost function (and
thereby minimize the cost of the system). It is worth noting at this point that the word
"cost" can have multiple meanings, depending on the particular problem. For instance,
cost can refer to the actual monetary cost of a system (number of computer units to host a
website, amount of cable needed to connect Philadelphia and New York), the delay of the
system (loading time for a website, transmission delay for a communication network), the
reliability of the system (number of dropped calls in a cellphone network, average
lifetime of a car transmission), or any other types of factors that reduce the effectiveness
and efficiency of the system.

Because optimization typically becomes a mathematical minimization problem, we are
going to discuss minimization here.

Minimization

Minimization is the act of finding the numerically lowest point in a given function, or in a
particular range of a given function. Students of mathematics and calculus may remember
using the derivative of a function to find the maxima and minima of a function. If we
have a function f(x), we can find the maxima, minima, or saddle-points (points where the
function has zero slope, but are not maxima or minima) by solving for x in the following
equation:

df(x)/dx = 0

In other words, we are looking for the roots of the derivative of the function f. Once we
have the roots of the function (if any), we can test those points to see if they are relatively
high (maxima), or relatively low (minima). Some other words to remember are:

Global Minimum: A global minimum of a function is the lowest value of that function
anywhere.

Local Minimum: A local minimum of a function is the lowest value of that function within a
given range A < x < B. If the function derivative has no roots in that range, then the
minimum occurs at either A or B.

We will discuss some other techniques for finding minima below.


Unconstrained Minimization
Unconstrained Minimization refers to the minimization of the given function without
having to worry about any other rules or caveats. Constrained Minimization, on the
other hand, refers to minimization problems where there are other factors or constraints
that must be satisfied.

Besides the method above (where we take the derivative of the function and set that equal
to zero), there are several numerical methods that we can use to find the minima of a
function. These methods are useful when using computational tools such as Matlab.

Hessian Matrix

The function has a local minimum at a point x if the gradient of f is zero at x and the
Hessian matrix H(x) is positive definite:

H(x) = [ ∂^2 f / ∂x_i ∂x_j ]

Where x is a vector of all the independent variables of the function. If x is a scalar
variable, the Hessian matrix reduces to the second derivative of the function f.

Newton-Raphson Method

The Newton-Raphson Method of computing the minimum of a function f uses an iterative
computation. We can define the scheme:

x_{n+1} = x_n − H(x_n)^-1 g(x_n)

Where

g(x) is the gradient of f at x, and H(x) is the Hessian matrix of f at x.

As we repeat the above equation, plugging in consecutive values for n, our solution will
converge on the true solution. However, this process will take infinitely many iterations
to converge, so oftentimes an approximation of the true solution will suffice.
Steepest Descent Method

The Newton-Raphson method can be tricky because it relies on the second derivative of
the function f, and this can oftentimes be difficult (if not impossible) to accurately
calculate. The Steepest Descent Method, however, does not require the second
derivative, but it does require the selection of an appropriate scalar quantity ε, which
cannot be chosen arbitrarily (but which can also not be calculated using a set formula).
The Steepest Descent method is defined by the following iterative computation, where
g(x_n) is the gradient of f:

x_{n+1} = x_n − ε g(x_n)
Where epsilon needs to be sufficiently small. If epsilon is too large, the iteration may
diverge. If this happens, a new epsilon value needs to be chosen, and the process needs to
be repeated.
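
Below is a minimal sketch of the Steepest Descent iteration in Python with NumPy
(the example function, starting point, and ε are arbitrary illustrative choices):

    import numpy as np

    def steepest_descent(grad, x0, eps=0.1, tol=1e-8, max_iter=10000):
        """Iterate x_{n+1} = x_n - eps * grad(x_n) until the step is tiny."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            step = eps * grad(x)
            x = x - step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2; the gradient has a
    # single root at (1, -3), which is the global minimum.
    grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)])
    print(steepest_descent(grad_f, [0.0, 0.0]))   # approximately [ 1. -3.]
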

Constrained Minimization
Constrained Minimization is the process of finding the minimum value of a function
under a certain number of additional rules or constraints. For instance, we could say
"Find the minimum value of f(x), but g(x) must equal 10". These kinds of problems are
difficult, but fortunately we can utilize the Kuhn-Tucker theorem, and also the
Karush-Kuhn-Tucker theorem, to solve them.

There are two different types of constraints: equality constraints and inequality
constraints. We will consider them individually, and then we will consider them together.

Equality Constraints

The Kuhn-Tucker Theorem is a method for minimizing a function f(x) under the equality
constraint g(x). We can define the theorem as follows:

If we have a function f, and an equality constraint g in the following form:

g(x) = 0,

Then we can convert this problem into an unconstrained minimization problem by
constructing the Lagrangian function of f and g:

L(x, Λ) = f(x) + <Λ, g(x)>

Where Λ is the Lagrangian vector, and < , > denotes the scalar product operation of the
R^n vector space (where n is the number of equality constraints). Λ is the Lagrangian
Multiplier vector, with one entry in Λ for each equality constraint on the equation. We
will discuss scalar products more later. If we differentiate this equation with respect to x,
we can find the minimum of this whole function L(x), and that will be the minimum of
our function f.

This is a set of n equations with 2n unknown variables (Λ and x vectors). We can create
additional equations by differentiating with respect to each element of Λ and x.

Inequality Constraints

Similar to the method above, let's say that we have a cost function f, and an inequality
constraint in the following form:

g(x) ≤ 0

Then we can take the Lagrangian of this again:

L(x, Λ) = f(x) + <Λ, g(x)>

But we now must also use the following two equations in determining our solution:

Λ ≥ 0
<Λ, g(x)> = 0

These last two equations can be interpreted in the following way:

if g(x) < 0, then Λ = 0

if g(x) = 0, then Λ ≥ 0

Using these two additional equations, we can solve for our minimization answer in a
similar manner as above.

Equality and Inequality Constraints

If we have a set of equality constraints g(x) = 0 and inequality constraints h(x) ≤ 0:

g(x) = 0
h(x) ≤ 0

We can combine them into a single Lagrangian with two additional conditions:

L(x, Λ, M) = f(x) + <Λ, g(x)> + <M, h(x)>

M ≥ 0
<M, h(x)> = 0

The last two conditions can be interpreted in the same manner as above to find the
solution.

Infinite Dimension Minimization


The above methods work well if the variables involved in the analysis are finite-
dimensional vectors, especially those in the R^N space. However, what if we are trying to
minimize something that is more complex than a vector, such as a function? If we
consider the L2 space, we have an infinite-dimensional space where the members of that
space are all functions. We will define the term functional as follows:

Functional
A functional is a function that takes one or more functions as arguments, and
which returns a scalar value.

Let's say that we have a function x of time t. We can define the functional f as:

f(x(t))

With that function, we can associate a cost function J:

Where we are explicitly taking account of t in the definition of f. To minimize this


function, like all minimization problems, we need to take the derivative of the function,
and set the derivative to zero. However, we are not able to take a standard derivative of J
with respect to x, because x is a function that varies with time. However, we can define a
new type of derivative, the Gateaux Derivative that can handle this special case.

Gateaux Derivative

We can define the Gateaux Derivative in terms of the following limit:

δF(x; h) = lim_{ε→0} [F(x + εh) − F(x)] / ε

Which is similar to the classical definition of the derivative, except with the inclusion of
the term ε. In English, above we took the derivative of F with respect to x, in the direction
of h. Here, h is an arbitrary function of time, in the same space as x (here we are talking about
the L2 space). We can use the Gateaux derivative to find the minimization of our function
above.
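For example, take F(x) = ∫ x(t)² dt on the L2 space. Then

F(x + εh) − F(x) = ∫ [2εx(t)h(t) + ε²h(t)²] dt

and dividing by ε and letting ε → 0 leaves the Gateaux derivative 2∫ x(t)h(t) dt, which vanishes in every direction h only when x(t) = 0.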
Euler-Lagrange Equation

The Euler-Lagrange Equation uses the Gateaux derivative, discussed above, to find the
minimization of the following type of functional:

J(x) = ∫ f(x, ẋ, t) dt

We want to find the solutions to this problem:

δJ(x) = 0

And the solution is:

∂f/∂x − (d/dt)(∂f/∂ẋ) = 0

The partial derivatives can be done in an ordinary way, ignoring the fact that x is a
function of t. Solutions to this equation are either the maxima or minima of the cost
function J.

Example: Shortest Distance

We've heard colloquially that the shortest distance between two points is a straight line.
We can use the Euler-Lagrange equation to prove this rule.

If we have two points in R^2 space, a and b, we would like to find the minimum function
that joins these two points. We can define the differential ds as the differential along the
function that joins points a and b:

ds = √(dx² + dy²) = √(1 + (y′)²) dx

Our function that we are trying to minimize then is defined as:

J(y) = ∫ ds

or:

J(y) = ∫ √(1 + (y′)²) dx

We can take the Gateaux derivative of the functional J and set it equal to zero to find the
minimum function between these two points.
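Carrying the calculation through with the Euler-Lagrange equation: with f(y, y′) = √(1 + (y′)²) we have ∂f/∂y = 0 and ∂f/∂y′ = y′/√(1 + (y′)²), so the equation reduces to

d/dx [ y′/√(1 + (y′)²) ] = 0

which forces y′ to be constant. The minimizing curve is therefore y = mx + c, a straight line through a and b.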
Laplace Transform Table
Time Domain            Laplace Domain

δ(t)                   1

δ(t − a)               e^{−as}

u(t)                   1/s

u(t − a)               e^{−as}/s

t·u(t)                 1/s²

t^n·u(t)               n!/s^{n+1}

e^{at}·u(t)            1/(s − a)

t^n·e^{at}·u(t)        n!/(s − a)^{n+1}

cos(ωt)·u(t)           s/(s² + ω²)

sin(ωt)·u(t)           ω/(s² + ω²)

cosh(ωt)·u(t)          s/(s² − ω²)

sinh(ωt)·u(t)          ω/(s² − ω²)

e^{at}cos(ωt)·u(t)     (s − a)/((s − a)² + ω²)

e^{at}sin(ωt)·u(t)     ω/((s − a)² + ω²)
Laplace Transform Table 2

ID | Function | Time domain x(t) = L⁻¹{X(s)} | Laplace domain X(s) = L{x(t)} | ROC for causal systems

1    | ideal delay                                              | δ(t − τ)                                | e^{−τs}                                   |
1a   | unit impulse                                             | δ(t)                                    | 1                                         | all s
2    | delayed nth power with frequency shift                   | ((t − τ)^n / n!) e^{−α(t−τ)} u(t − τ)   | e^{−τs} / (s + α)^{n+1}                   | Re(s) > 0
2a   | nth power                                                | (t^n / n!) u(t)                         | 1 / s^{n+1}                               | Re(s) > 0
2a.1 | qth power                                                | (t^q / Γ(q + 1)) u(t)                   | 1 / s^{q+1}                               | Re(s) > 0
2a.2 | unit step                                                | u(t)                                    | 1/s                                       | Re(s) > 0
2b   | delayed unit step                                        | u(t − τ)                                | e^{−τs} / s                               | Re(s) > 0
2c   | ramp                                                     | t·u(t)                                  | 1/s²                                      | Re(s) > 0
2d   | nth power with frequency shift                           | (t^n / n!) e^{−αt} u(t)                 | 1 / (s + α)^{n+1}                         | Re(s) > −α
2d.1 | exponential decay                                        | e^{−αt} u(t)                            | 1 / (s + α)                               | Re(s) > −α
3    | exponential approach                                     | (1 − e^{−αt}) u(t)                      | α / (s(s + α))                            | Re(s) > 0
4    | sine                                                     | sin(ωt) u(t)                            | ω / (s² + ω²)                             | Re(s) > 0
5    | cosine                                                   | cos(ωt) u(t)                            | s / (s² + ω²)                             | Re(s) > 0
6    | hyperbolic sine                                          | sinh(αt) u(t)                           | α / (s² − α²)                             | Re(s) > |α|
7    | hyperbolic cosine                                        | cosh(αt) u(t)                           | s / (s² − α²)                             | Re(s) > |α|
8    | exponentially-decaying sine wave                         | e^{−αt} sin(ωt) u(t)                    | ω / ((s + α)² + ω²)                       | Re(s) > −α
9    | exponentially-decaying cosine wave                       | e^{−αt} cos(ωt) u(t)                    | (s + α) / ((s + α)² + ω²)                 | Re(s) > −α
10   | nth root                                                 | t^{1/n} u(t)                            | s^{−(n+1)/n} Γ(1 + 1/n)                   | Re(s) > 0
11   | natural logarithm                                        | ln(t / t₀) u(t)                         | −(1/s)[ln(t₀ s) + γ]                      | Re(s) > 0
12   | Bessel function of the first kind, of order n            | J_n(ωt) u(t)                            | ω^n (s + √(s² + ω²))^{−n} / √(s² + ω²)    | Re(s) > 0 (n > −1)
13   | Modified Bessel function of the first kind, of order n   | I_n(ωt) u(t)                            | ω^n (s + √(s² − ω²))^{−n} / √(s² − ω²)    | Re(s) > |ω|
14   | Bessel function of the second kind, of order 0           | Y₀(αt) u(t)                             |                                           |
15   | Modified Bessel function of the second kind, of order 0  | K₀(αt) u(t)                             |                                           |
16   | Error function                                           | erf(t) u(t)                             | e^{s²/4} erfc(s/2) / s                    | Re(s) > 0

Explanatory notes:

• u(t) represents the Heaviside step function.
• δ(t) represents the Dirac delta function.
• Γ(z) represents the Gamma function.
• γ is the Euler-Mascheroni constant.
• t, a real number, typically represents time, although it can represent any independent dimension.
• s is the complex angular frequency.
• α, β, τ, and ω are real numbers.
• n is an integer.

• A causal system is a system where the impulse response h(t) is zero for all time t prior
to t = 0. In general, the ROC for causal systems is not the same as the ROC for
anticausal systems. See also causality.
Laplace Transform Properties

Property                Definition

Linearity               L{a f(t) + b g(t)} = a F(s) + b G(s)

Differentiation         L{f^{(n)}(t)} = s^n F(s) − s^{n−1} f(0) − s^{n−2} f′(0) − … − f^{(n−1)}(0)

Frequency Division      L{t f(t)} = −dF(s)/ds

Frequency Integration   L{f(t)/t} = ∫_s^∞ F(σ) dσ

Time Integration        L{∫₀^t f(τ) dτ} = F(s)/s

Scaling                 L{f(at)} = (1/a) F(s/a), a > 0

Initial value theorem   f(0⁺) = lim_{s→∞} s F(s)

Final value theorem     f(∞) = lim_{s→0} s F(s), valid only if all poles of sF(s) lie in the left half-plane

Frequency Shifts        L{e^{at} f(t)} = F(s − a)

Time Shifts             L{f(t − a) u(t − a)} = e^{−as} F(s)

Convolution Theorem     L{(f ∗ g)(t)} = F(s) G(s)

Where:
s = σ + jω
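Individual entries of the tables above can be spot-checked symbolically, assuming SymPy is available (laplace_transform returns the transform together with its convergence abscissa and conditions):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
omega = sp.symbols('omega', positive=True)

print(sp.laplace_transform(sp.sin(omega * t), t, s))  # omega/(omega**2 + s**2), as in the sine row
print(sp.laplace_transform(sp.exp(-2 * t), t, s))     # 1/(s + 2), exponential decay with alpha = 2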

Fourier Transform Table

Time Domain            Fourier Domain

1                      2πδ(ω)

−0.5 + u(t)            1/(jω)

δ(t)                   1

δ(t − c)               e^{−jωc}

u(t)                   πδ(ω) + 1/(jω)

e^{−bt}u(t)            1/(jω + b)

cos ω₀t                π[δ(ω + ω₀) + δ(ω − ω₀)]

cos(ω₀t + θ)           π[e^{−jθ}δ(ω + ω₀) + e^{jθ}δ(ω − ω₀)]

sin ω₀t                jπ[δ(ω + ω₀) − δ(ω − ω₀)]

sin(ω₀t + θ)           jπ[e^{−jθ}δ(ω + ω₀) − e^{jθ}δ(ω − ω₀)]

p_τ(t)                 τ sinc(τω/2)

τ sinc(τt/2)           2πp_τ(ω)

Note: sinc(x) = sin(x) / x ; p_τ(t) is the rectangular pulse function of width τ

Fourier Transform Table 2

Signal | Fourier transform (unitary, angular frequency) | Fourier transform (unitary, ordinary frequency) | Remarks

(Only the remarks column of this table is recoverable:)

10. The rectangular pulse and the normalized sinc function.
11. Dual of rule 10. The rectangular function is an idealized low-pass filter, and the sinc function is the non-causal impulse response of such a filter.
12. tri is the triangular function.
13. Dual of rule 12.
14. Shows that the Gaussian function exp(−αt²) is its own Fourier transform. For this to be integrable we must have Re(α) > 0.
• common in optics
• a > 0
• the transform is the function itself
• J0(t) is the Bessel function of first kind of order 0; rect is the rectangular function
• the generalization of the previous transform; Tn(t) is the Chebyshev polynomial of the first kind; Un(t) is the Chebyshev polynomial of the second kind

Fourier Transform Properties

Signal | Fourier transform (unitary, angular frequency) | Fourier transform (unitary, ordinary frequency) | Remarks

1 | a·x(t) + b·y(t)  | a·X(ω) + b·Y(ω)       | a·X(f) + b·Y(f)         | Linearity
2 | x(t − a)         | e^{−jaω} X(ω)         | e^{−j2πaf} X(f)         | Shift in time domain
3 | e^{jat} x(t)     | X(ω − a)              | X(f − a/2π)             | Shift in frequency domain, dual of 2
4 | x(at)            | (1/|a|) X(ω/a)        | (1/|a|) X(f/a)          | If |a| is large, then x(at) is concentrated around 0 and (1/|a|)X(ω/a) spreads out and flattens
5 | X(t)             | x(−ω)                 | x(−f)                   | Duality property of the Fourier transform. Results from swapping "dummy" variables of t and ω (or t and f).
6 | d^n x(t)/dt^n    | (jω)^n X(ω)           | (j2πf)^n X(f)           | Generalized derivative property of the Fourier transform
7 | t^n x(t)         | j^n d^n X(ω)/dω^n     | (j/2π)^n d^n X(f)/df^n  | This is the dual to 6
8 | (x ∗ y)(t)       | √(2π) X(ω) Y(ω)       | X(f) Y(f)               | x ∗ y denotes the convolution of x and y; this rule is the convolution theorem
9 | x(t) y(t)        | (1/√(2π)) (X ∗ Y)(ω)  | (X ∗ Y)(f)              | This is the dual of 8

DTFT Transform Table

Time domain | Frequency domain | Remarks

(Only the remarks column of this table is recoverable: the rows were parameterized by an integer k, real numbers a and W, an integer M, real numbers A and B, and a complex C.)

DTFT Transform Properties

Property | Time domain | Frequency domain | Remarks

Linearity                    | a·x[n] + b·y[n]   | a·X(ω) + b·Y(ω)                      |
Shift in time                | x[n − k]          | e^{−jωk} X(ω)                        | integer k
Shift in frequency           | e^{jan} x[n]      | X(ω − a)                             | real number a
Time reversal                | x[−n]             | X(−ω)                                |
Time conjugation             | x*[n]             | X*(−ω)                               |
Time reversal & conjugation  | x*[−n]            | X*(ω)                                |
Derivative in frequency      | n·x[n]            | j dX(ω)/dω                           |
Integral in frequency        | x[n]/n            | −j ∫ X(ω) dω                         | up to an integration constant, for x[0] = 0
Convolve in time             | (x ∗ y)[n]        | X(ω) Y(ω)                            |
Multiply in time             | x[n] y[n]         | (1/2π) ∫_{−π}^{π} X(v) Y(ω − v) dv   |
Correlation                  | r_{xy}[n]         | X*(ω) Y(ω)                           |

Where:

• ∗ is the convolution between two signals
• x*[n] is the complex conjugate of the function x[n]
• r_{xy}[n] represents the correlation between x[n] and y[n].

DFT Transform Table

Time-Domain x[n]        Frequency Domain X[k]                       Notes

x[n]                    X[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πkn/N}      DFT Definition

x[(n − m) mod N]        X[k] e^{−j2πkm/N}                           Shift theorem

x[n] real               X[k] = X*[N − k]                            Real DFT
Z Transform Table

Signal, x[n] | Z-transform, X(z) | ROC
Z Transform Properties

Property | Time domain | Z-domain | ROC

Notation                 | x[n] = Z⁻¹{X(z)}                 | X(z) = Z{x[n]}                            | ROC: r₂ < |z| < r₁
Linearity                | a₁·x₁[n] + a₂·x₂[n]              | a₁·X₁(z) + a₂·X₂(z)                       | At least the intersection of ROC1 and ROC2
Time shifting            | x[n − k]                         | z^{−k} X(z)                               | ROC, except z = 0 if k > 0 and z = ∞ if k < 0
Scaling in the z-domain  | a^n x[n]                         | X(z/a)                                    | |a|r₂ < |z| < |a|r₁
Time reversal            | x[−n]                            | X(1/z)                                    | 1/r₁ < |z| < 1/r₂
Conjugation              | x*[n]                            | X*(z*)                                    | ROC
Real part                | Re{x[n]}                         | (1/2)[X(z) + X*(z*)]                      | ROC
Imaginary part           | Im{x[n]}                         | (1/2j)[X(z) − X*(z*)]                     | ROC
Differentiation          | n·x[n]                           | −z dX(z)/dz                               | ROC
Convolution              | x₁[n] ∗ x₂[n]                    | X₁(z) X₂(z)                               | At least the intersection of ROC1 and ROC2
Correlation              | r_{x₁x₂}[l] = x₁[l] ∗ x₂[−l]     | X₁(z) X₂(z⁻¹)                             | At least the intersection of ROC of X₁(z) and X₂(z⁻¹)
Multiplication           | x₁[n] x₂[n]                      | (1/2πj) ∮ X₁(v) X₂(z/v) v⁻¹ dv            | At least the product of the two ROCs
Parseval's relation      | Σ_n x₁[n] x₂*[n] = (1/2πj) ∮ X₁(v) X₂*(1/v*) v⁻¹ dv

• Initial value theorem

x[0] = lim_{z→∞} X(z), if x[n] is causal

• Final value theorem

lim_{n→∞} x[n] = lim_{z→1} (z − 1) X(z), only if poles of (z − 1)X(z) are inside the unit
circle
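As a quick check of both theorems, take x[n] = a^n u[n] with |a| < 1, so that X(z) = z/(z − a). Then lim_{z→∞} X(z) = 1 = x[0], matching the initial value theorem, and (z − 1)X(z) → 0 as z → 1, matching the final value theorem since a^n → 0.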

Hilbert Transform Table

Signal                                  Hilbert transform

sin(t)/t (sinc function)                (1 − cos t)/t

p_τ(t) (rectangular function)           (1/π) ln|(t + τ/2)/(t − τ/2)|

δ(t) (Dirac delta function)             1/(πt)
Properties of Integrals
Property                Integral

Homogeneity             ∫ c·f(x) dx = c ∫ f(x) dx

Associativity           ∫ [f(x) + g(x)] dx = ∫ f(x) dx + ∫ g(x) dx

Integration by Parts    ∫ u dv = u·v − ∫ v du
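For example, choosing u = x and dv = e^x dx, integration by parts gives ∫ x e^x dx = x e^x − ∫ e^x dx = (x − 1) e^x + C.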

Table of Integrals
This is a small summary of the identities found Here.
Properties of Derivatives

Product Rule             (f·g)′ = f′·g + f·g′

Quotient Rule            (f/g)′ = (f′·g − f·g′) / g²

Functional Power Rule    (f^g)′ = f^g · (g′·ln f + g·f′/f)

Chain Rule               [f(g(x))]′ = f′(g(x)) · g′(x)

Logarithm Rule           (ln f)′ = f′/f
Table of Derivatives

d/dx x^c = c·x^{c−1}, where both x^c and c·x^{c−1} are defined

d/dx ln x = 1/x, x > 0

d/dx c^x = c^x ln c, c > 0

d/dx log_c x = 1/(x ln c), c > 0, c ≠ 1
Trigonometric Identities
sin²θ + cos²θ = 1        1 + tan²θ = sec²θ        1 + cot²θ = csc²θ

sin(−θ) = −sin θ         cos(−θ) = cos θ          tan(−θ) = −tan θ

sin 2θ = 2 sin θ cos θ   cos 2θ = cos²θ − sin²θ = 2cos²θ − 1 = 1 − 2sin²θ

e^{jθ} = cos θ + j sin θ
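The double-angle identities follow directly from Euler's formula: squaring e^{jθ} = cos θ + j sin θ gives

e^{j2θ} = (cos²θ − sin²θ) + j(2 sin θ cos θ)

and matching real and imaginary parts yields cos 2θ = cos²θ − sin²θ and sin 2θ = 2 sin θ cos θ.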
Normal Distribution
The normal distribution is an extremely important family of continuous probability
distributions. It has applications in every engineering discipline. Each member of the
family may be defined by two parameters, location and scale: the mean ("average", μ)
and variance (standard deviation squared, σ²) respectively.

The probability density function, or pdf, of the distribution is given by:

f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}

The cumulative distribution function, or cdf, of the normal distribution is:

F(x) = (1/2)[1 + erf((x − μ)/(σ√2))]

These functions are often impractical to evaluate quickly, and therefore tables of values
are used to allow fast look-up of required data. The family of normal distributions is
infinite in size, but all can be "normalised" to the case with mean of 0 and SD of 1:

Given a normal distribution X~N(μ, σ²), the standardised normal
distribution, Z, is:

Z = (X − μ)/σ ~ N(0, 1)

Due to this relationship, all tables refer to the standardised distribution, Z.
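The table entries below can be reproduced with nothing more than the Python standard library, since Φ(z) = (1/2)[1 + erf(z/√2)]:

import math

def phi(z):
    # cdf of the standardised normal distribution Z ~ N(0,1)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"{phi(-1.0):.5f}")  # 0.15866, the z = -1.0 entry in the table below
print(f"{phi(1.96):.5f}")  # 0.97500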

Probability Content from –∞ to Z (Z≤0)

Table of Probability Content between –∞ and z in the Standardised Normal Distribution Z~N(0,1) for z≤0

z 0.0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.50000 0.49601 0.49202 0.48803 0.48405 0.48006 0.47608 0.47210 0.46812 0.46414

-0.1 0.46017 0.45620 0.45224 0.44828 0.44433 0.44038 0.43644 0.43251 0.42858 0.42465

-0.2 0.42074 0.41683 0.41294 0.40905 0.40517 0.40129 0.39743 0.39358 0.38974 0.38591

-0.3 0.38209 0.37828 0.37448 0.37070 0.36693 0.36317 0.35942 0.35569 0.35197 0.34827

-0.4 0.34458 0.34090 0.33724 0.33360 0.32997 0.32636 0.32276 0.31918 0.31561 0.31207

-0.5 0.30854 0.30503 0.30153 0.29806 0.29460 0.29116 0.28774 0.28434 0.28096 0.27760

-0.6 0.27425 0.27093 0.26763 0.26435 0.26109 0.25785 0.25463 0.25143 0.24825 0.24510

-0.7 0.24196 0.23885 0.23576 0.23270 0.22965 0.22663 0.22363 0.22065 0.21770 0.21476

-0.8 0.21186 0.20897 0.20611 0.20327 0.20045 0.19766 0.19489 0.19215 0.18943 0.18673

-0.9 0.18406 0.18141 0.17879 0.17619 0.17361 0.17106 0.16853 0.16602 0.16354 0.16109

-1.0 0.15866 0.15625 0.15386 0.15151 0.14917 0.14686 0.14457 0.14231 0.14007 0.13786

-1.1 0.13567 0.13350 0.13136 0.12924 0.12714 0.12507 0.12302 0.12100 0.11900 0.11702

-1.2 0.11507 0.11314 0.11123 0.10935 0.10749 0.10565 0.10383 0.10204 0.10027 0.09853

-1.3 0.09680 0.09510 0.09342 0.09176 0.09012 0.08851 0.08691 0.08534 0.08379 0.08226

-1.4 0.08076 0.07927 0.07780 0.07636 0.07493 0.07353 0.07215 0.07078 0.06944 0.06811

-1.5 0.06681 0.06552 0.06426 0.06301 0.06178 0.06057 0.05938 0.05821 0.05705 0.05592

-1.6 0.05480 0.05370 0.05262 0.05155 0.05050 0.04947 0.04846 0.04746 0.04648 0.04551

-1.7 0.04457 0.04363 0.04272 0.04182 0.04093 0.04006 0.03920 0.03836 0.03754 0.03673

-1.8 0.03593 0.03515 0.03438 0.03362 0.03288 0.03216 0.03144 0.03074 0.03005 0.02938

-1.9 0.02872 0.02807 0.02743 0.02680 0.02619 0.02559 0.02500 0.02442 0.02385 0.02330

-2.0 0.02275 0.02222 0.02169 0.02118 0.02068 0.02018 0.01970 0.01923 0.01876 0.01831

-2.1 0.01786 0.01743 0.01700 0.01659 0.01618 0.01578 0.01539 0.01500 0.01463 0.01426

-2.2 0.01390 0.01355 0.01321 0.01287 0.01255 0.01222 0.01191 0.01160 0.01130 0.01101

-2.3 0.01072 0.01044 0.01017 0.00990 0.00964 0.00939 0.00914 0.00889 0.00866 0.00842

-2.4 0.00820 0.00798 0.00776 0.00755 0.00734 0.00714 0.00695 0.00676 0.00657 0.00639

-2.5 0.00621 0.00604 0.00587 0.00570 0.00554 0.00539 0.00523 0.00508 0.00494 0.00480

-2.6 0.00466 0.00453 0.00440 0.00427 0.00415 0.00402 0.00391 0.00379 0.00368 0.00357

-2.7 0.00347 0.00336 0.00326 0.00317 0.00307 0.00298 0.00289 0.00280 0.00272 0.00264

-2.8 0.00256 0.00248 0.00240 0.00233 0.00226 0.00219 0.00212 0.00205 0.00199 0.00193

-2.9 0.00187 0.00181 0.00175 0.00169 0.00164 0.00159 0.00154 0.00149 0.00144 0.00139

-3.0 0.00135 0.00131 0.00126 0.00122 0.00118 0.00114 0.00111 0.00107 0.00104 0.00100

-3.1 0.00097 0.00094 0.00090 0.00087 0.00084 0.00082 0.00079 0.00076 0.00074 0.00071

-3.2 0.00069 0.00066 0.00064 0.00062 0.00060 0.00058 0.00056 0.00054 0.00052 0.00050

-3.3 0.00048 0.00047 0.00045 0.00043 0.00042 0.00040 0.00039 0.00038 0.00036 0.00035

-3.4 0.00034 0.00032 0.00031 0.00030 0.00029 0.00028 0.00027 0.00026 0.00025 0.00024

-3.5 0.00023 0.00022 0.00022 0.00021 0.00020 0.00019 0.00019 0.00018 0.00017 0.00017

-3.6 0.00016 0.00015 0.00015 0.00014 0.00014 0.00013 0.00013 0.00012 0.00012 0.00011

-3.7 0.00011 0.00010 0.00010 0.00010 0.00009 0.00009 0.00008 0.00008 0.00008 0.00008

-3.8 0.00007 0.00007 0.00007 0.00006 0.00006 0.00006 0.00006 0.00005 0.00005 0.00005

-3.9 0.00005 0.00005 0.00004 0.00004 0.00004 0.00004 0.00004 0.00004 0.00003 0.00003

-4.0 0.00003 0.00003 0.00003 0.00003 0.00003 0.00003 0.00002 0.00002 0.00002 0.00002

Probability Content from –∞ to Z (Z≥0)

Table of Probability Content between –∞ and z in the Standardised Normal Distribution Z~N(0,1) for z≥0
Z 0.0 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09

0 0.50000 0.50399 0.50798 0.51197 0.51595 0.51994 0.52392 0.52790 0.53188 0.53586

0.1 0.53983 0.54380 0.54776 0.55172 0.55567 0.55962 0.56356 0.56749 0.57142 0.57535

0.2 0.57926 0.58317 0.58706 0.59095 0.59483 0.59871 0.60257 0.60642 0.61026 0.61409

0.3 0.61791 0.62172 0.62552 0.62930 0.63307 0.63683 0.64058 0.64431 0.64803 0.65173

0.4 0.65542 0.65910 0.66276 0.66640 0.67003 0.67364 0.67724 0.68082 0.68439 0.68793

0.5 0.69146 0.69497 0.69847 0.70194 0.70540 0.70884 0.71226 0.71566 0.71904 0.72240

0.6 0.72575 0.72907 0.73237 0.73565 0.73891 0.74215 0.74537 0.74857 0.75175 0.75490

0.7 0.75804 0.76115 0.76424 0.76730 0.77035 0.77337 0.77637 0.77935 0.78230 0.78524

0.8 0.78814 0.79103 0.79389 0.79673 0.79955 0.80234 0.80511 0.80785 0.81057 0.81327

0.9 0.81594 0.81859 0.82121 0.82381 0.82639 0.82894 0.83147 0.83398 0.83646 0.83891

1.0 0.84134 0.84375 0.84614 0.84849 0.85083 0.85314 0.85543 0.85769 0.85993 0.86214

1.1 0.86433 0.86650 0.86864 0.87076 0.87286 0.87493 0.87698 0.87900 0.88100 0.88298

1.2 0.88493 0.88686 0.88877 0.89065 0.89251 0.89435 0.89617 0.89796 0.89973 0.90147

1.3 0.90320 0.90490 0.90658 0.90824 0.90988 0.91149 0.91309 0.91466 0.91621 0.91774
1.4 0.91924 0.92073 0.92220 0.92364 0.92507 0.92647 0.92785 0.92922 0.93056 0.93189

1.5 0.93319 0.93448 0.93574 0.93699 0.93822 0.93943 0.94062 0.94179 0.94295 0.94408

1.6 0.94520 0.94630 0.94738 0.94845 0.94950 0.95053 0.95154 0.95254 0.95352 0.95449

1.7 0.95543 0.95637 0.95728 0.95818 0.95907 0.95994 0.96080 0.96164 0.96246 0.96327

1.8 0.96407 0.96485 0.96562 0.96638 0.96712 0.96784 0.96856 0.96926 0.96995 0.97062

1.9 0.97128 0.97193 0.97257 0.97320 0.97381 0.97441 0.97500 0.97558 0.97615 0.97670

2.0 0.97725 0.97778 0.97831 0.97882 0.97932 0.97982 0.98030 0.98077 0.98124 0.98169

2.1 0.98214 0.98257 0.98300 0.98341 0.98382 0.98422 0.98461 0.98500 0.98537 0.98574

2.2 0.98610 0.98645 0.98679 0.98713 0.98745 0.98778 0.98809 0.98840 0.98870 0.98899

2.3 0.98928 0.98956 0.98983 0.99010 0.99036 0.99061 0.99086 0.99111 0.99134 0.99158

2.4 0.99180 0.99202 0.99224 0.99245 0.99266 0.99286 0.99305 0.99324 0.99343 0.99361

2.5 0.99379 0.99396 0.99413 0.99430 0.99446 0.99461 0.99477 0.99492 0.99506 0.99520

2.6 0.99534 0.99547 0.99560 0.99573 0.99585 0.99598 0.99609 0.99621 0.99632 0.99643

2.7 0.99653 0.99664 0.99674 0.99683 0.99693 0.99702 0.99711 0.99720 0.99728 0.99736

2.8 0.99744 0.99752 0.99760 0.99767 0.99774 0.99781 0.99788 0.99795 0.99801 0.99807
2.9 0.99813 0.99819 0.99825 0.99831 0.99836 0.99841 0.99846 0.99851 0.99856 0.99861

3.0 0.99865 0.99869 0.99874 0.99878 0.99882 0.99886 0.99889 0.99893 0.99896 0.99900

3.1 0.99903 0.99906 0.99910 0.99913 0.99916 0.99918 0.99921 0.99924 0.99926 0.99929

3.2 0.99931 0.99934 0.99936 0.99938 0.99940 0.99942 0.99944 0.99946 0.99948 0.99950

3.3 0.99952 0.99953 0.99955 0.99957 0.99958 0.99960 0.99961 0.99962 0.99964 0.99965

3.4 0.99966 0.99968 0.99969 0.99970 0.99971 0.99972 0.99973 0.99974 0.99975 0.99976

3.5 0.99977 0.99978 0.99978 0.99979 0.99980 0.99981 0.99981 0.99982 0.99983 0.99983

3.6 0.99984 0.99985 0.99985 0.99986 0.99986 0.99987 0.99987 0.99988 0.99988 0.99989

3.7 0.99989 0.99990 0.99990 0.99990 0.99991 0.99991 0.99992 0.99992 0.99992 0.99992

3.8 0.99993 0.99993 0.99993 0.99994 0.99994 0.99994 0.99994 0.99995 0.99995 0.99995

3.9 0.99995 0.99995 0.99996 0.99996 0.99996 0.99996 0.99996 0.99996 0.99997 0.99997

4.0 0.99997 0.99997 0.99997 0.99997 0.99997 0.99997 0.99998 0.99998 0.99998 0.99998

Far-Right Tail Probability Content


Table of Probability Content between z and +∞ in the Standardised Normal Distribution Z~N(0,1) for z>2

Z P(Z>z) z P(Z>z) z P(Z>z) z P(Z>z)

2.0 0.02275 3.0 0.001350 4.0 0.00003167 5.0 2.867 E-7

2.1 0.01786 3.1 0.0009676 4.1 0.00002066 5.5 1.899 E-8

2.2 0.01390 3.2 0.0006871 4.2 0.00001335 6.0 9.866 E-10

2.3 0.01072 3.3 0.0004834 4.3 0.00000854 6.5 4.016 E-11

2.4 0.00820 3.4 0.0003369 4.4 0.000005413 7.0 1.280 E-12

2.5 0.00621 3.5 0.0002326 4.5 0.000003398 7.5 3.191 E-14

2.6 0.004661 3.6 0.0001591 4.6 0.000002112 8.0 6.221 E-16

2.7 0.003467 3.7 0.0001078 4.7 0.000001300 8.5 9.480 E-18

2.8 0.002555 3.8 0.00007235 4.8 7.933 E-7 9.0 1.129 E-19

2.9 0.001866 3.9 0.00004810 4.9 4.792 E-7 9.5 1.049 E-21

Student's T-Distribution
Table of Critical Values, tα,ν, in a Student T-Distribution with ν
degrees of freedom and a confidence limit p where α=1–p.

Confidence Limits (top) and α (bottom) for a One-Tailed Test.

ν 60% 75% 80% 85% 90% 95% 97.5% 98% 99% 99.5% 99.75% 99.9% 99.95%

0.4 0.25 0.2 0.15 0.1 0.05 0.025 0.02 0.01 0.005 0.0025 0.001 0.0005

1 0.32492 1.00000 1.37638 1.96261 3.07768 6.31375 12.70620 15.89454 31.82052 63.65674 127.32134 318.30884 636.61925

2 0.28868 0.81650 1.06066 1.38621 1.88562 2.91999 4.30265 4.84873 6.96456 9.92484 14.08905 22.32712 31.59905

3 0.27667 0.76489 0.97847 1.24978 1.63774 2.35336 3.18245 3.48191 4.54070 5.84091 7.45332 10.21453 12.92398

4 0.27072 0.74070 0.94096 1.18957 1.53321 2.13185 2.77645 2.99853 3.74695 4.60409 5.59757 7.17318 8.61030

5 0.26718 0.72669 0.91954 1.15577 1.47588 2.01505 2.57058 2.75651 3.36493 4.03214 4.77334 5.89343 6.86883

6 0.26483 0.71756 0.90570 1.13416 1.43976 1.94318 2.44691 2.61224 3.14267 3.70743 4.31683 5.20763 5.95882

7 0.26317 0.71114 0.89603 1.11916 1.41492 1.89458 2.36462 2.51675 2.99795 3.49948 4.02934 4.78529 5.40788

8 0.26192 0.70639 0.88889 1.10815 1.39682 1.85955 2.30600 2.44898 2.89646 3.35539 3.83252 4.50079 5.04131

9 0.26096 0.70272 0.88340 1.09972 1.38303 1.83311 2.26216 2.39844 2.82144 3.24984 3.68966 4.29681 4.78091

10 0.26018 0.69981 0.87906 1.09306 1.37218 1.81246 2.22814 2.35931 2.76377 3.16927 3.58141 4.14370 4.58689

11 0.25956 0.69745 0.87553 1.08767 1.36343 1.79588 2.20099 2.32814 2.71808 3.10581 3.49661 4.02470 4.43698
12 0.25903 0.69548 0.87261 1.08321 1.35622 1.78229 2.17881 2.30272 2.68100 3.05454 3.42844 3.92963 4.31779

13 0.25859 0.69383 0.87015 1.07947 1.35017 1.77093 2.16037 2.28160 2.65031 3.01228 3.37247 3.85198 4.22083

14 0.25821 0.69242 0.86805 1.07628 1.34503 1.76131 2.14479 2.26378 2.62449 2.97684 3.32570 3.78739 4.14045

15 0.25789 0.69120 0.86624 1.07353 1.34061 1.75305 2.13145 2.24854 2.60248 2.94671 3.28604 3.73283 4.07277

16 0.25760 0.69013 0.86467 1.07114 1.33676 1.74588 2.11991 2.23536 2.58349 2.92078 3.25199 3.68615 4.01500

17 0.25735 0.68920 0.86328 1.06903 1.33338 1.73961 2.10982 2.22385 2.56693 2.89823 3.22245 3.64577 3.96513

18 0.25712 0.68836 0.86205 1.06717 1.33039 1.73406 2.10092 2.21370 2.55238 2.87844 3.19657 3.61048 3.92165

19 0.25692 0.68762 0.86095 1.06551 1.32773 1.72913 2.09302 2.20470 2.53948 2.86093 3.17372 3.57940 3.88341

20 0.25674 0.68695 0.85996 1.06402 1.32534 1.72472 2.08596 2.19666 2.52798 2.84534 3.15340 3.55181 3.84952

21 0.25658 0.68635 0.85907 1.06267 1.32319 1.72074 2.07961 2.18943 2.51765 2.83136 3.13521 3.52715 3.81928

22 0.25643 0.68581 0.85827 1.06145 1.32124 1.71714 2.07387 2.18289 2.50832 2.81876 3.11882 3.50499 3.79213

23 0.25630 0.68531 0.85753 1.06034 1.31946 1.71387 2.06866 2.17696 2.49987 2.80734 3.10400 3.48496 3.76763

24 0.25617 0.68485 0.85686 1.05932 1.31784 1.71088 2.06390 2.17154 2.49216 2.79694 3.09051 3.46678 3.74540

25 0.25606 0.68443 0.85624 1.05838 1.31635 1.70814 2.05954 2.16659 2.48511 2.78744 3.07820 3.45019 3.72514

26 0.25595 0.68404 0.85567 1.05752 1.31497 1.70562 2.05553 2.16203 2.47863 2.77871 3.06691 3.43500 3.70661

27 0.25586 0.68368 0.85514 1.05673 1.31370 1.70329 2.05183 2.15782 2.47266 2.77068 3.05652 3.42103 3.68959
28 0.25577 0.68335 0.85465 1.05599 1.31253 1.70113 2.04841 2.15393 2.46714 2.76326 3.04693 3.40816 3.67391

29 0.25568 0.68304 0.85419 1.05530 1.31143 1.69913 2.04523 2.15033 2.46202 2.75639 3.03805 3.39624 3.65941

30 0.25561 0.68276 0.85377 1.05466 1.31042 1.69726 2.04227 2.14697 2.45726 2.75000 3.02980 3.38518 3.64596

40 0.25504 0.68067 0.85070 1.05005 1.30308 1.68385 2.02108 2.12291 2.42326 2.70446 2.97117 3.30688 3.55097

50 0.25470 0.67943 0.84887 1.04729 1.29871 1.67591 2.00856 2.10872 2.40327 2.67779 2.93696 3.26141 3.49601

60 0.25447 0.67860 0.84765 1.04547 1.29582 1.67065 2.00030 2.09936 2.39012 2.66028 2.91455 3.23171 3.46020

70 0.25431 0.67801 0.84679 1.04417 1.29376 1.66691 1.99444 2.09273 2.38081 2.64790 2.89873 3.21079 3.43501

80 0.25419 0.67757 0.84614 1.04320 1.29222 1.66412 1.99006 2.08778 2.37387 2.63869 2.88697 3.19526 3.41634

90 0.25410 0.67723 0.84563 1.04244 1.29103 1.66196 1.98667 2.08394 2.36850 2.63157 2.87788 3.18327 3.40194

100 0.25402 0.67695 0.84523 1.04184 1.29007 1.66023 1.98397 2.08088 2.36422 2.62589 2.87065 3.17374 3.39049

500 0.25348 0.67498 0.84234 1.03751 1.28325 1.64791 1.96472 2.05912 2.33383 2.58570 2.81955 3.10661 3.31009

1000 0.25341 0.67474 0.84198 1.03697 1.28240 1.64638 1.96234 2.05643 2.33008 2.58075 2.81328 3.09840 3.30028

∞ 0.25335 0.67449 0.84162 1.03643 1.28155 1.64485 1.95996 2.05375 2.32635 2.57583 2.80703 3.09023 3.29053

Explanatory Notes

• For a Two-Tailed Test, use the α here that corresponds to half the two-tailed α.
o For example if a two-tailed confidence limit of 90% is desired (α=0.1), use a one-tailed α from this table of 0.05
• In the limit ν=∞, this distribution is equivalent to a normal distribution X~N(0,1)
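These critical values can be cross-checked numerically, assuming SciPy is installed:

from scipy.stats import t

# One-tailed 95% critical value with nu = 10 degrees of freedom.
print(round(t.ppf(0.95, df=10), 5))  # 1.81246, matching the table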

Chi-Squared Distribution

Table of values of χ² in a Chi-Squared Distribution with k degrees of freedom such that p is the area between χ² and +∞

Probability Content, p, between χ2 and +∞

0.995 0.99 0.975 0.95 0.9 0.75 0.5 0.25 0.1 0.05 0.025 0.01 0.005 0.002 0.001

1 3.927e-5 1.570e-4 9.820e-4 0.00393 0.0157 0.102 0.455 1.323 2.706 3.841 5.024 6.635 7.879 9.550 10.828

2 0.0100 0.0201 0.0506 0.103 0.211 0.575 1.386 2.773 4.605 5.991 7.378 9.210 10.597 12.429 13.816

3 0.0717 0.115 0.216 0.352 0.584 1.213 2.366 4.108 6.251 7.815 9.348 11.345 12.838 14.796 16.266

4 0.207 0.297 0.484 0.711 1.064 1.923 3.357 5.385 7.779 9.488 11.143 13.277 14.860 16.924 18.467

5 0.412 0.554 0.831 1.145 1.610 2.675 4.351 6.626 9.236 11.070 12.833 15.086 16.750 18.907 20.515

6 0.676 0.872 1.237 1.635 2.204 3.455 5.348 7.841 10.645 12.592 14.449 16.812 18.548 20.791 22.458

7 0.989 1.239 1.690 2.167 2.833 4.255 6.346 9.037 12.017 14.067 16.013 18.475 20.278 22.601 24.322

8 1.344 1.646 2.180 2.733 3.490 5.071 7.344 10.219 13.362 15.507 17.535 20.090 21.955 24.352 26.124

9 1.735 2.088 2.700 3.325 4.168 5.899 8.343 11.389 14.684 16.919 19.023 21.666 23.589 26.056 27.877

10 2.156 2.558 3.247 3.940 4.865 6.737 9.342 12.549 15.987 18.307 20.483 23.209 25.188 27.722 29.588

11 2.603 3.053 3.816 4.575 5.578 7.584 10.341 13.701 17.275 19.675 21.920 24.725 26.757 29.354 31.264

12 3.074 3.571 4.404 5.226 6.304 8.438 11.340 14.845 18.549 21.026 23.337 26.217 28.300 30.957 32.909

13 3.565 4.107 5.009 5.892 7.042 9.299 12.340 15.984 19.812 22.362 24.736 27.688 29.819 32.535 34.528
14 4.075 4.660 5.629 6.571 7.790 10.165 13.339 17.117 21.064 23.685 26.119 29.141 31.319 34.091 36.123

15 4.601 5.229 6.262 7.261 8.547 11.037 14.339 18.245 22.307 24.996 27.488 30.578 32.801 35.628 37.697

16 5.142 5.812 6.908 7.962 9.312 11.912 15.338 19.369 23.542 26.296 28.845 32.000 34.267 37.146 39.252

17 5.697 6.408 7.564 8.672 10.085 12.792 16.338 20.489 24.769 27.587 30.191 33.409 35.718 38.648 40.790

18 6.265 7.015 8.231 9.390 10.865 13.675 17.338 21.605 25.989 28.869 31.526 34.805 37.156 40.136 42.312

19 6.844 7.633 8.907 10.117 11.651 14.562 18.338 22.718 27.204 30.144 32.852 36.191 38.582 41.610 43.820

20 7.434 8.260 9.591 10.851 12.443 15.452 19.337 23.828 28.412 31.410 34.170 37.566 39.997 43.072 45.315

21 8.034 8.897 10.283 11.591 13.240 16.344 20.337 24.935 29.615 32.671 35.479 38.932 41.401 44.522 46.797

22 8.643 9.542 10.982 12.338 14.041 17.240 21.337 26.039 30.813 33.924 36.781 40.289 42.796 45.962 48.268

23 9.260 10.196 11.689 13.091 14.848 18.137 22.337 27.141 32.007 35.172 38.076 41.638 44.181 47.391 49.728

24 9.886 10.856 12.401 13.848 15.659 19.037 23.337 28.241 33.196 36.415 39.364 42.980 45.559 48.812 51.179

25 10.520 11.524 13.120 14.611 16.473 19.939 24.337 29.339 34.382 37.652 40.646 44.314 46.928 50.223 52.620

26 11.160 12.198 13.844 15.379 17.292 20.843 25.336 30.435 35.563 38.885 41.923 45.642 48.290 51.627 54.052

27 11.808 12.879 14.573 16.151 18.114 21.749 26.336 31.528 36.741 40.113 43.195 46.963 49.645 53.023 55.476

28 12.461 13.565 15.308 16.928 18.939 22.657 27.336 32.620 37.916 41.337 44.461 48.278 50.993 54.411 56.892

29 13.121 14.256 16.047 17.708 19.768 23.567 28.336 33.711 39.087 42.557 45.722 49.588 52.336 55.792 58.301

30 13.787 14.953 16.791 18.493 20.599 24.478 29.336 34.800 40.256 43.773 46.979 50.892 53.672 57.167 59.703
31 14.458 15.655 17.539 19.281 21.434 25.390 30.336 35.887 41.422 44.985 48.232 52.191 55.003 58.536 61.098

32 15.134 16.362 18.291 20.072 22.271 26.304 31.336 36.973 42.585 46.194 49.480 53.486 56.328 59.899 62.487

33 15.815 17.074 19.047 20.867 23.110 27.219 32.336 38.058 43.745 47.400 50.725 54.776 57.648 61.256 63.870

34 16.501 17.789 19.806 21.664 23.952 28.136 33.336 39.141 44.903 48.602 51.966 56.061 58.964 62.608 65.247

35 17.192 18.509 20.569 22.465 24.797 29.054 34.336 40.223 46.059 49.802 53.203 57.342 60.275 63.955 66.619

36 17.887 19.233 21.336 23.269 25.643 29.973 35.336 41.304 47.212 50.998 54.437 58.619 61.581 65.296 67.985

37 18.586 19.960 22.106 24.075 26.492 30.893 36.336 42.383 48.363 52.192 55.668 59.893 62.883 66.633 69.346

38 19.289 20.691 22.878 24.884 27.343 31.815 37.335 43.462 49.513 53.384 56.896 61.162 64.181 67.966 70.703

39 19.996 21.426 23.654 25.695 28.196 32.737 38.335 44.539 50.660 54.572 58.120 62.428 65.476 69.294 72.055

40 20.707 22.164 24.433 26.509 29.051 33.660 39.335 45.616 51.805 55.758 59.342 63.691 66.766 70.618 73.402

41 21.421 22.906 25.215 27.326 29.907 34.585 40.335 46.692 52.949 56.942 60.561 64.950 68.053 71.938 74.745

42 22.138 23.650 25.999 28.144 30.765 35.510 41.335 47.766 54.090 58.124 61.777 66.206 69.336 73.254 76.084

43 22.859 24.398 26.785 28.965 31.625 36.436 42.335 48.840 55.230 59.304 62.990 67.459 70.616 74.566 77.419

44 23.584 25.148 27.575 29.787 32.487 37.363 43.335 49.913 56.369 60.481 64.201 68.710 71.893 75.874 78.750

45 24.311 25.901 28.366 30.612 33.350 38.291 44.335 50.985 57.505 61.656 65.410 69.957 73.166 77.179 80.077

46 25.041 26.657 29.160 31.439 34.215 39.220 45.335 52.056 58.641 62.830 66.617 71.201 74.437 78.481 81.400

47 25.775 27.416 29.956 32.268 35.081 40.149 46.335 53.127 59.774 64.001 67.821 72.443 75.704 79.780 82.720
48 26.511 28.177 30.755 33.098 35.949 41.079 47.335 54.196 60.907 65.171 69.023 73.683 76.969 81.075 84.037

49 27.249 28.941 31.555 33.930 36.818 42.010 48.335 55.265 62.038 66.339 70.222 74.919 78.231 82.367 85.351

50 27.991 29.707 32.357 34.764 37.689 42.942 49.335 56.334 63.167 67.505 71.420 76.154 79.490 83.657 86.661

51 28.735 30.475 33.162 35.600 38.560 43.874 50.335 57.401 64.295 68.669 72.616 77.386 80.747 84.943 87.968

52 29.481 31.246 33.968 36.437 39.433 44.808 51.335 58.468 65.422 69.832 73.810 78.616 82.001 86.227 89.272

53 30.230 32.018 34.776 37.276 40.308 45.741 52.335 59.534 66.548 70.993 75.002 79.843 83.253 87.507 90.573

54 30.981 32.793 35.586 38.116 41.183 46.676 53.335 60.600 67.673 72.153 76.192 81.069 84.502 88.786 91.872

55 31.735 33.570 36.398 38.958 42.060 47.610 54.335 61.665 68.796 73.311 77.380 82.292 85.749 90.061 93.168

56 32.490 34.350 37.212 39.801 42.937 48.546 55.335 62.729 69.919 74.468 78.567 83.513 86.994 91.335 94.461

57 33.248 35.131 38.027 40.646 43.816 49.482 56.335 63.793 71.040 75.624 79.752 84.733 88.236 92.605 95.751

58 34.008 35.913 38.844 41.492 44.696 50.419 57.335 64.857 72.160 76.778 80.936 85.950 89.477 93.874 97.039

59 34.770 36.698 39.662 42.339 45.577 51.356 58.335 65.919 73.279 77.931 82.117 87.166 90.715 95.140 98.324

60 35.534 37.485 40.482 43.188 46.459 52.294 59.335 66.981 74.397 79.082 83.298 88.379 91.952 96.404 99.607

61 36.301 38.273 41.303 44.038 47.342 53.232 60.335 68.043 75.514 80.232 84.476 89.591 93.186 97.665 100.888

62 37.068 39.063 42.126 44.889 48.226 54.171 61.335 69.104 76.630 81.381 85.654 90.802 94.419 98.925 102.166

63 37.838 39.855 42.950 45.741 49.111 55.110 62.335 70.165 77.745 82.529 86.830 92.010 95.649 100.182 103.442

64 38.610 40.649 43.776 46.595 49.996 56.050 63.335 71.225 78.860 83.675 88.004 93.217 96.878 101.437 104.716
65 39.383 41.444 44.603 47.450 50.883 56.990 64.335 72.285 79.973 84.821 89.177 94.422 98.105 102.691 105.988

66 40.158 42.240 45.431 48.305 51.770 57.931 65.335 73.344 81.085 85.965 90.349 95.626 99.330 103.942 107.258

67 40.935 43.038 46.261 49.162 52.659 58.872 66.335 74.403 82.197 87.108 91.519 96.828 100.554 105.192 108.526

68 41.713 43.838 47.092 50.020 53.548 59.814 67.335 75.461 83.308 88.250 92.689 98.028 101.776 106.440 109.791

69 42.494 44.639 47.924 50.879 54.438 60.756 68.334 76.519 84.418 89.391 93.856 99.228 102.996 107.685 111.055

70 43.275 45.442 48.758 51.739 55.329 61.698 69.334 77.577 85.527 90.531 95.023 100.425 104.215 108.929 112.317

71 44.058 46.246 49.592 52.600 56.221 62.641 70.334 78.634 86.635 91.670 96.189 101.621 105.432 110.172 113.577

72 44.843 47.051 50.428 53.462 57.113 63.585 71.334 79.690 87.743 92.808 97.353 102.816 106.648 111.412 114.835

73 45.629 47.858 51.265 54.325 58.006 64.528 72.334 80.747 88.850 93.945 98.516 104.010 107.862 112.651 116.092

74 46.417 48.666 52.103 55.189 58.900 65.472 73.334 81.803 89.956 95.081 99.678 105.202 109.074 113.889 117.346

75 47.206 49.475 52.942 56.054 59.795 66.417 74.334 82.858 91.061 96.217 100.839 106.393 110.286 115.125 118.599

76 47.997 50.286 53.782 56.920 60.690 67.362 75.334 83.913 92.166 97.351 101.999 107.583 111.495 116.359 119.850

77 48.788 51.097 54.623 57.786 61.586 68.307 76.334 84.968 93.270 98.484 103.158 108.771 112.704 117.591 121.100

78 49.582 51.910 55.466 58.654 62.483 69.252 77.334 86.022 94.374 99.617 104.316 109.958 113.911 118.823 122.348

79 50.376 52.725 56.309 59.522 63.380 70.198 78.334 87.077 95.476 100.749 105.473 111.144 115.117 120.052 123.594

80 51.172 53.540 57.153 60.391 64.278 71.145 79.334 88.130 96.578 101.879 106.629 112.329 116.321 121.280 124.839

81 51.969 54.357 57.998 61.261 65.176 72.091 80.334 89.184 97.680 103.010 107.783 113.512 117.524 122.507 126.083
82 52.767 55.174 58.845 62.132 66.076 73.038 81.334 90.237 98.780 104.139 108.937 114.695 118.726 123.733 127.324

83 53.567 55.993 59.692 63.004 66.976 73.985 82.334 91.289 99.880 105.267 110.090 115.876 119.927 124.957 128.565

84 54.368 56.813 60.540 63.876 67.876 74.933 83.334 92.342 100.980 106.395 111.242 117.057 121.126 126.179 129.804

85 55.170 57.634 61.389 64.749 68.777 75.881 84.334 93.394 102.079 107.522 112.393 118.236 122.325 127.401 131.041

86 55.973 58.456 62.239 65.623 69.679 76.829 85.334 94.446 103.177 108.648 113.544 119.414 123.522 128.621 132.277

87 56.777 59.279 63.089 66.498 70.581 77.777 86.334 95.497 104.275 109.773 114.693 120.591 124.718 129.840 133.512

88 57.582 60.103 63.941 67.373 71.484 78.726 87.334 96.548 105.372 110.898 115.841 121.767 125.913 131.057 134.745

89 58.389 60.928 64.793 68.249 72.387 79.675 88.334 97.599 106.469 112.022 116.989 122.942 127.106 132.273 135.978

90 59.196 61.754 65.647 69.126 73.291 80.625 89.334 98.650 107.565 113.145 118.136 124.116 128.299 133.489 137.208

91 60.005 62.581 66.501 70.003 74.196 81.574 90.334 99.700 108.661 114.268 119.282 125.289 129.491 134.702 138.438

92 60.815 63.409 67.356 70.882 75.100 82.524 91.334 100.750 109.756 115.390 120.427 126.462 130.681 135.915 139.666

93 61.625 64.238 68.211 71.760 76.006 83.474 92.334 101.800 110.850 116.511 121.571 127.633 131.871 137.127 140.893

94 62.437 65.068 69.068 72.640 76.912 84.425 93.334 102.850 111.944 117.632 122.715 128.803 133.059 138.337 142.119

95 63.250 65.898 69.925 73.520 77.818 85.376 94.334 103.899 113.038 118.752 123.858 129.973 134.247 139.546 143.344

96 64.063 66.730 70.783 74.401 78.725 86.327 95.334 104.948 114.131 119.871 125.000 131.141 135.433 140.755 144.567

97 64.878 67.562 71.642 75.282 79.633 87.278 96.334 105.997 115.223 120.990 126.141 132.309 136.619 141.962 145.789

98 65.694 68.396 72.501 76.164 80.541 88.229 97.334 107.045 116.315 122.108 127.282 133.476 137.803 143.168 147.010
99 66.510 69.230 73.361 77.046 81.449 89.181 98.334 108.093 117.407 123.225 128.422 134.642 138.987 144.373 148.230

100 67.328 70.065 74.222 77.929 82.358 90.133 99.334 109.141 118.498 124.342 129.561 135.807 140.169 145.577 149.449

101 68.146 70.901 75.083 78.813 83.267 91.085 100.334 110.189 119.589 125.458 130.700 136.971 141.351 146.780 150.667

102 68.965 71.737 75.946 79.697 84.177 92.038 101.334 111.236 120.679 126.574 131.838 138.134 142.532 147.982 151.884

103 69.785 72.575 76.809 80.582 85.088 92.991 102.334 112.284 121.769 127.689 132.975 139.297 143.712 149.183 153.099

104 70.606 73.413 77.672 81.468 85.998 93.944 103.334 113.331 122.858 128.804 134.111 140.459 144.891 150.383 154.314

105 71.428 74.252 78.536 82.354 86.909 94.897 104.334 114.378 123.947 129.918 135.247 141.620 146.070 151.582 155.528

106 72.251 75.092 79.401 83.240 87.821 95.850 105.334 115.424 125.035 131.031 136.382 142.780 147.247 152.780 156.740

107 73.075 75.932 80.267 84.127 88.733 96.804 106.334 116.471 126.123 132.144 137.517 143.940 148.424 153.977 157.952

108 73.899 76.774 81.133 85.015 89.645 97.758 107.334 117.517 127.211 133.257 138.651 145.099 149.599 155.173 159.162

109 74.724 77.616 82.000 85.903 90.558 98.712 108.334 118.563 128.298 134.369 139.784 146.257 150.774 156.369 160.372

110 75.550 78.458 82.867 86.792 91.471 99.666 109.334 119.608 129.385 135.480 140.917 147.414 151.948 157.563 161.581

111 76.377 79.302 83.735 87.681 92.385 100.620 110.334 120.654 130.472 136.591 142.049 148.571 153.122 158.757 162.788

112 77.204 80.146 84.604 88.570 93.299 101.575 111.334 121.699 131.558 137.701 143.180 149.727 154.294 159.950 163.995

113 78.033 80.991 85.473 89.461 94.213 102.530 112.334 122.744 132.643 138.811 144.311 150.882 155.466 161.141 165.201

114 78.862 81.836 86.342 90.351 95.128 103.485 113.334 123.789 133.729 139.921 145.441 152.037 156.637 162.332 166.406

115 79.692 82.682 87.213 91.242 96.043 104.440 114.334 124.834 134.813 141.030 146.571 153.191 157.808 163.523 167.610
116 80.522 83.529 88.084 92.134 96.958 105.396 115.334 125.878 135.898 142.138 147.700 154.344 158.977 164.712 168.813

117 81.353 84.377 88.955 93.026 97.874 106.352 116.334 126.923 136.982 143.246 148.829 155.496 160.146 165.900 170.016

118 82.185 85.225 89.827 93.918 98.790 107.307 117.334 127.967 138.066 144.354 149.957 156.648 161.314 167.088 171.217

119 83.018 86.074 90.700 94.811 99.707 108.263 118.334 129.011 139.149 145.461 151.084 157.800 162.481 168.275 172.418

120 83.852 86.923 91.573 95.705 100.624 109.220 119.334 130.055 140.233 146.567 152.211 158.950 163.648 169.461 173.617

121 84.686 87.773 92.446 96.598 101.541 110.176 120.334 131.098 141.315 147.674 153.338 160.100 164.814 170.647 174.816

122 85.520 88.624 93.320 97.493 102.458 111.133 121.334 132.142 142.398 148.779 154.464 161.250 165.980 171.831 176.014

123 86.356 89.475 94.195 98.387 103.376 112.089 122.334 133.185 143.480 149.885 155.589 162.398 167.144 173.015 177.212

124 87.192 90.327 95.070 99.283 104.295 113.046 123.334 134.228 144.562 150.989 156.714 163.546 168.308 174.198 178.408

125 88.029 91.180 95.946 100.178 105.213 114.004 124.334 135.271 145.643 152.094 157.839 164.694 169.471 175.380 179.604

126 88.866 92.033 96.822 101.074 106.132 114.961 125.334 136.313 146.724 153.198 158.962 165.841 170.634 176.562 180.799

127 89.704 92.887 97.698 101.971 107.051 115.918 126.334 137.356 147.805 154.302 160.086 166.987 171.796 177.743 181.993

128 90.543 93.741 98.576 102.867 107.971 116.876 127.334 138.398 148.885 155.405 161.209 168.133 172.957 178.923 183.186

129 91.382 94.596 99.453 103.765 108.891 117.834 128.334 139.440 149.965 156.508 162.331 169.278 174.118 180.103 184.379

130 92.222 95.451 100.331 104.662 109.811 118.792 129.334 140.482 151.045 157.610 163.453 170.423 175.278 181.282 185.571

131 93.063 96.307 101.210 105.560 110.732 119.750 130.334 141.524 152.125 158.712 164.575 171.567 176.438 182.460 186.762

132 93.904 97.163 102.089 106.459 111.652 120.708 131.334 142.566 153.204 159.814 165.696 172.711 177.597 183.637 187.953
133 94.746 98.020 102.968 107.357 112.573 121.667 132.334 143.608 154.283 160.915 166.816 173.854 178.755 184.814 189.142

134 95.588 98.878 103.848 108.257 113.495 122.625 133.334 144.649 155.361 162.016 167.936 174.996 179.913 185.990 190.331

135 96.431 99.736 104.729 109.156 114.417 123.584 134.334 145.690 156.440 163.116 169.056 176.138 181.070 187.165 191.520

136 97.275 100.595 105.609 110.056 115.338 124.543 135.334 146.731 157.518 164.216 170.175 177.280 182.226 188.340 192.707

137 98.119 101.454 106.491 110.956 116.261 125.502 136.334 147.772 158.595 165.316 171.294 178.421 183.382 189.514 193.894

138 98.964 102.314 107.372 111.857 117.183 126.461 137.334 148.813 159.673 166.415 172.412 179.561 184.538 190.688 195.080

139 99.809 103.174 108.254 112.758 118.106 127.421 138.334 149.854 160.750 167.514 173.530 180.701 185.693 191.861 196.266

140 100.655 104.034 109.137 113.659 119.029 128.380 139.334 150.894 161.827 168.613 174.648 181.840 186.847 193.033 197.451

141 101.501 104.896 110.020 114.561 119.953 129.340 140.334 151.934 162.904 169.711 175.765 182.979 188.001 194.205 198.635

142 102.348 105.757 110.903 115.463 120.876 130.299 141.334 152.975 163.980 170.809 176.882 184.118 189.154 195.376 199.819

143 103.196 106.619 111.787 116.366 121.800 131.259 142.334 154.015 165.056 171.907 177.998 185.256 190.306 196.546 201.002

144 104.044 107.482 112.671 117.268 122.724 132.219 143.334 155.055 166.132 173.004 179.114 186.393 191.458 197.716 202.184

145 104.892 108.345 113.556 118.171 123.649 133.180 144.334 156.094 167.207 174.101 180.229 187.530 192.610 198.885 203.366

146 105.741 109.209 114.441 119.075 124.574 134.140 145.334 157.134 168.283 175.198 181.344 188.666 193.761 200.054 204.547

147 106.591 110.073 115.326 119.979 125.499 135.101 146.334 158.174 169.358 176.294 182.459 189.802 194.912 201.222 205.727

148 107.441 110.937 116.212 120.883 126.424 136.061 147.334 159.213 170.432 177.390 183.573 190.938 196.062 202.390 206.907

149 108.291 111.802 117.098 121.787 127.349 137.022 148.334 160.252 171.507 178.485 184.687 192.073 197.211 203.557 208.086
150 109.142 112.668 117.985 122.692 128.275 137.983 149.334 161.291 172.581 179.581 185.800 193.208 198.360 204.723 209.265

151 109.994 113.533 118.871 123.597 129.201 138.944 150.334 162.330 173.655 180.676 186.914 194.342 199.509 205.889 210.443

152 110.846 114.400 119.759 124.502 130.127 139.905 151.334 163.369 174.729 181.770 188.026 195.476 200.657 207.054 211.620

153 111.698 115.266 120.646 125.408 131.054 140.866 152.334 164.408 175.803 182.865 189.139 196.609 201.804 208.219 212.797

154 112.551 116.134 121.534 126.314 131.980 141.828 153.334 165.446 176.876 183.959 190.251 197.742 202.951 209.383 213.973

155 113.405 117.001 122.423 127.220 132.907 142.789 154.334 166.485 177.949 185.052 191.362 198.874 204.098 210.547 215.149

156 114.259 117.869 123.312 128.127 133.835 143.751 155.334 167.523 179.022 186.146 192.474 200.006 205.244 211.710 216.324

157 115.113 118.738 124.201 129.034 134.762 144.713 156.334 168.561 180.094 187.239 193.584 201.138 206.390 212.873 217.499

158 115.968 119.607 125.090 129.941 135.690 145.675 157.334 169.599 181.167 188.332 194.695 202.269 207.535 214.035 218.673

159 116.823 120.476 125.980 130.848 136.618 146.637 158.334 170.637 182.239 189.424 195.805 203.400 208.680 215.197 219.846

160 117.679 121.346 126.870 131.756 137.546 147.599 159.334 171.675 183.311 190.516 196.915 204.530 209.824 216.358 221.019

161 118.536 122.216 127.761 132.664 138.474 148.561 160.334 172.713 184.382 191.608 198.025 205.660 210.968 217.518 222.191

162 119.392 123.086 128.651 133.572 139.403 149.523 161.334 173.751 185.454 192.700 199.134 206.790 212.111 218.678 223.363

163 120.249 123.957 129.543 134.481 140.331 150.486 162.334 174.788 186.525 193.791 200.243 207.919 213.254 219.838 224.535

164 121.107 124.828 130.434 135.390 141.260 151.449 163.334 175.825 187.596 194.883 201.351 209.047 214.396 220.997 225.705

165 121.965 125.700 131.326 136.299 142.190 152.411 164.334 176.863 188.667 195.973 202.459 210.176 215.539 222.156 226.876

166 122.823 126.572 132.218 137.209 143.119 153.374 165.334 177.900 189.737 197.064 203.567 211.304 216.680 223.314 228.045
167 123.682 127.445 133.111 138.118 144.049 154.337 166.334 178.937 190.808 198.154 204.675 212.431 217.821 224.472 229.215

168 124.541 128.318 134.003 139.028 144.979 155.300 167.334 179.974 191.878 199.244 205.782 213.558 218.962 225.629 230.383

169 125.401 129.191 134.897 139.939 145.909 156.263 168.334 181.011 192.948 200.334 206.889 214.685 220.102 226.786 231.552

170 126.261 130.064 135.790 140.849 146.839 157.227 169.334 182.047 194.017 201.423 207.995 215.812 221.242 227.942 232.719

171 127.122 130.938 136.684 141.760 147.769 158.190 170.334 183.084 195.087 202.513 209.102 216.938 222.382 229.098 233.887

172 127.983 131.813 137.578 142.671 148.700 159.154 171.334 184.120 196.156 203.602 210.208 218.063 223.521 230.253 235.053

173 128.844 132.687 138.472 143.582 149.631 160.117 172.334 185.157 197.225 204.690 211.313 219.189 224.660 231.408 236.220

174 129.706 133.563 139.367 144.494 150.562 161.081 173.334 186.193 198.294 205.779 212.419 220.314 225.798 232.563 237.385

175 130.568 134.438 140.262 145.406 151.493 162.045 174.334 187.229 199.363 206.867 213.524 221.438 226.936 233.717 238.551

176 131.430 135.314 141.157 146.318 152.425 163.009 175.334 188.265 200.432 207.955 214.628 222.563 228.074 234.870 239.716

177 132.293 136.190 142.053 147.230 153.356 163.973 176.334 189.301 201.500 209.042 215.733 223.687 229.211 236.023 240.880

178 133.157 137.066 142.949 148.143 154.288 164.937 177.334 190.337 202.568 210.130 216.837 224.810 230.347 237.176 242.044

179 134.020 137.943 143.845 149.056 155.220 165.901 178.334 191.373 203.636 211.217 217.941 225.933 231.484 238.328 243.207

180 134.884 138.820 144.741 149.969 156.153 166.865 179.334 192.409 204.704 212.304 219.044 227.056 232.620 239.480 244.370

181 135.749 139.698 145.638 150.882 157.085 167.830 180.334 193.444 205.771 213.391 220.148 228.179 233.755 240.632 245.533

182 136.614 140.576 146.535 151.796 158.018 168.794 181.334 194.480 206.839 214.477 221.251 229.301 234.891 241.783 246.695

183 137.479 141.454 147.432 152.709 158.951 169.759 182.334 195.515 207.906 215.563 222.353 230.423 236.026 242.933 247.857
184 138.344 142.332 148.330 153.623 159.883 170.724 183.334 196.550 208.973 216.649 223.456 231.544 237.160 244.084 249.018

185 139.210 143.211 149.228 154.538 160.817 171.688 184.334 197.586 210.040 217.735 224.558 232.665 238.294 245.234 250.179

186 140.077 144.090 150.126 155.452 161.750 172.653 185.334 198.621 211.106 218.820 225.660 233.786 239.428 246.383 251.339

187 140.943 144.970 151.024 156.367 162.684 173.618 186.334 199.656 212.173 219.906 226.761 234.907 240.561 247.532 252.499

188 141.810 145.850 151.923 157.282 163.617 174.583 187.334 200.690 213.239 220.991 227.863 236.027 241.694 248.681 253.659

189 142.678 146.730 152.822 158.197 164.551 175.549 188.334 201.725 214.305 222.076 228.964 237.147 242.827 249.829 254.818

190 143.545 147.610 153.721 159.113 165.485 176.514 189.334 202.760 215.371 223.160 230.064 238.266 243.959 250.977 255.976

191 144.413 148.491 154.621 160.028 166.419 177.479 190.334 203.795 216.437 224.245 231.165 239.386 245.091 252.124 257.135

192 145.282 149.372 155.521 160.944 167.354 178.445 191.334 204.829 217.502 225.329 232.265 240.505 246.223 253.271 258.292

193 146.150 150.254 156.421 161.860 168.288 179.410 192.334 205.864 218.568 226.413 233.365 241.623 247.354 254.418 259.450

194 147.020 151.135 157.321 162.776 169.223 180.376 193.334 206.898 219.633 227.496 234.465 242.742 248.485 255.564 260.607

195 147.889 152.017 158.221 163.693 170.158 181.342 194.334 207.932 220.698 228.580 235.564 243.860 249.616 256.710 261.763

196 148.759 152.900 159.122 164.610 171.093 182.308 195.334 208.966 221.763 229.663 236.664 244.977 250.746 257.855 262.920

197 149.629 153.782 160.023 165.527 172.029 183.273 196.334 210.000 222.828 230.746 237.763 246.095 251.876 259.001 264.075

198 150.499 154.665 160.925 166.444 172.964 184.239 197.334 211.034 223.892 231.829 238.861 247.212 253.006 260.145 265.231

199 151.370 155.548 161.826 167.361 173.900 185.205 198.334 212.068 224.957 232.912 239.960 248.329 254.135 261.290 266.386

200 152.241 156.432 162.728 168.279 174.835 186.172 199.334 213.102 226.021 233.994 241.058 249.445 255.264 262.434 267.541
201 153.112 157.316 163.630 169.196 175.771 187.138 200.334 214.136 227.085 235.077 242.156 250.561 256.393 263.578 268.695

202 153.984 158.200 164.532 170.114 176.707 188.104 201.334 215.170 228.149 236.159 243.254 251.677 257.521 264.721 269.849

203 154.856 159.084 165.435 171.032 177.643 189.071 202.334 216.203 229.213 237.240 244.351 252.793 258.649 265.864 271.002

204 155.728 159.969 166.338 171.951 178.580 190.037 203.334 217.237 230.276 238.322 245.448 253.908 259.777 267.007 272.155

205 156.601 160.854 167.241 172.869 179.516 191.004 204.334 218.270 231.340 239.403 246.545 255.023 260.904 268.149 273.308

206 157.474 161.739 168.144 173.788 180.453 191.970 205.334 219.303 232.403 240.485 247.642 256.138 262.031 269.291 274.460

207 158.347 162.624 169.047 174.707 181.390 192.937 206.334 220.337 233.466 241.566 248.739 257.253 263.158 270.432 275.612

208 159.221 163.510 169.951 175.626 182.327 193.904 207.334 221.370 234.529 242.647 249.835 258.367 264.285 271.574 276.764

209 160.095 164.396 170.855 176.546 183.264 194.871 208.334 222.403 235.592 243.727 250.931 259.481 265.411 272.715 277.915

210 160.969 165.283 171.759 177.465 184.201 195.838 209.334 223.436 236.655 244.808 252.027 260.595 266.537 273.855 279.066

211 161.843 166.169 172.664 178.385 185.139 196.805 210.334 224.469 237.717 245.888 253.122 261.708 267.662 274.995 280.217

212 162.718 167.056 173.568 179.305 186.076 197.772 211.334 225.502 238.780 246.968 254.218 262.821 268.788 276.135 281.367

213 163.593 167.943 174.473 180.225 187.014 198.739 212.334 226.534 239.842 248.048 255.313 263.934 269.912 277.275 282.517

214 164.469 168.831 175.378 181.145 187.952 199.707 213.334 227.567 240.904 249.128 256.408 265.047 271.037 278.414 283.666

215 165.344 169.718 176.283 182.066 188.890 200.674 214.334 228.600 241.966 250.207 257.503 266.159 272.162 279.553 284.815

216 166.220 170.606 177.189 182.987 189.828 201.642 215.334 229.632 243.028 251.286 258.597 267.271 273.286 280.692 285.964

217 167.096 171.494 178.095 183.907 190.767 202.609 216.334 230.665 244.090 252.365 259.691 268.383 274.409 281.830 287.112
218 167.973 172.383 179.001 184.828 191.705 203.577 217.334 231.697 245.151 253.444 260.785 269.495 275.533 282.968 288.261

219 168.850 173.271 179.907 185.750 192.644 204.544 218.334 232.729 246.213 254.523 261.879 270.606 276.656 284.106 289.408

220 169.727 174.160 180.813 186.671 193.582 205.512 219.334 233.762 247.274 255.602 262.973 271.717 277.779 285.243 290.556

221 170.604 175.050 181.720 187.593 194.521 206.480 220.334 234.794 248.335 256.680 264.066 272.828 278.902 286.380 291.703

222 171.482 175.939 182.627 188.514 195.460 207.448 221.334 235.826 249.396 257.758 265.159 273.939 280.024 287.517 292.850

223 172.360 176.829 183.534 189.436 196.400 208.416 222.334 236.858 250.457 258.837 266.252 275.049 281.146 288.653 293.996

224 173.238 177.719 184.441 190.359 197.339 209.384 223.334 237.890 251.517 259.914 267.345 276.159 282.268 289.789 295.142

225 174.116 178.609 185.348 191.281 198.278 210.352 224.334 238.922 252.578 260.992 268.438 277.269 283.390 290.925 296.288

226 174.995 179.499 186.256 192.203 199.218 211.320 225.334 239.954 253.638 262.070 269.530 278.379 284.511 292.061 297.433

227 175.874 180.390 187.164 193.126 200.158 212.288 226.334 240.985 254.699 263.147 270.622 279.488 285.632 293.196 298.579

228 176.753 181.281 188.072 194.049 201.097 213.257 227.334 242.017 255.759 264.224 271.714 280.597 286.753 294.331 299.723

229 177.633 182.172 188.980 194.972 202.037 214.225 228.334 243.049 256.819 265.301 272.806 281.706 287.874 295.465 300.868

230 178.512 183.063 189.889 195.895 202.978 215.194 229.334 244.080 257.879 266.378 273.898 282.814 288.994 296.600 302.012

231 179.392 183.955 190.797 196.818 203.918 216.162 230.334 245.112 258.939 267.455 274.989 283.923 290.114 297.734 303.156

232 180.273 184.847 191.706 197.742 204.858 217.131 231.334 246.143 259.998 268.531 276.080 285.031 291.234 298.867 304.299

233 181.153 185.739 192.615 198.665 205.799 218.099 232.334 247.174 261.058 269.608 277.171 286.139 292.353 300.001 305.443

234 182.034 186.631 193.524 199.589 206.739 219.068 233.334 248.206 262.117 270.684 278.262 287.247 293.472 301.134 306.586

235 182.915 187.524 194.434 200.513 207.680 220.037 234.334 249.237 263.176 271.760 279.352 288.354 294.591 302.267 307.728

236 183.796 188.417 195.343 201.437 208.621 221.006 235.334 250.268 264.235 272.836 280.443 289.461 295.710 303.400 308.871

237 184.678 189.310 196.253 202.362 209.562 221.975 236.334 251.299 265.294 273.911 281.533 290.568 296.828 304.532 310.013

238 185.560 190.203 197.163 203.286 210.503 222.944 237.334 252.330 266.353 274.987 282.623 291.675 297.947 305.664 311.154

239 186.442 191.096 198.073 204.211 211.444 223.913 238.334 253.361 267.412 276.062 283.713 292.782 299.065 306.796 312.296

240 187.324 191.990 198.984 205.135 212.386 224.882 239.334 254.392 268.471 277.138 284.802 293.888 300.182 307.927 313.437

241 188.207 192.884 199.894 206.060 213.327 225.851 240.334 255.423 269.529 278.213 285.892 294.994 301.300 309.058 314.578

242 189.090 193.778 200.805 206.985 214.269 226.820 241.334 256.453 270.588 279.288 286.981 296.100 302.417 310.189 315.718

243 189.973 194.672 201.716 207.911 215.210 227.790 242.334 257.484 271.646 280.362 288.070 297.206 303.534 311.320 316.859

244 190.856 195.567 202.627 208.836 216.152 228.759 243.334 258.515 272.704 281.437 289.159 298.311 304.651 312.450 317.999

245 191.739 196.462 203.539 209.762 217.094 229.729 244.334 259.545 273.762 282.511 290.248 299.417 305.767 313.580 319.138

246 192.623 197.357 204.450 210.687 218.036 230.698 245.334 260.576 274.820 283.586 291.336 300.522 306.883 314.710 320.278

247 193.507 198.252 205.362 211.613 218.979 231.668 246.334 261.606 275.878 284.660 292.425 301.626 307.999 315.840 321.417

248 194.391 199.147 206.274 212.539 219.921 232.637 247.334 262.636 276.935 285.734 293.513 302.731 309.115 316.969 322.556

249 195.276 200.043 207.186 213.465 220.863 233.607 248.334 263.667 277.993 286.808 294.601 303.835 310.231 318.098 323.694

250 196.161 200.939 208.098 214.392 221.806 234.577 249.334 264.697 279.050 287.882 295.689 304.940 311.346 319.227 324.832

300 240.663 245.972 253.912 260.878 269.068 283.135 299.334 316.138 331.789 341.395 349.874 359.906 366.844 375.369 381.425

350 285.608 291.406 300.064 307.648 316.550 331.810 349.334 367.464 384.306 394.626 403.723 414.474 421.900 431.017 437.488

400 330.903 337.155 346.482 354.641 364.207 380.577 399.334 418.697 436.649 447.632 457.305 468.724 476.606 486.274 493.132

450 376.483 383.163 393.118 401.817 412.007 429.418 449.334 469.855 488.849 500.456 510.670 522.717 531.026 541.212 548.432

500 422.303 429.388 439.936 449.147 459.926 478.323 499.333 520.950 540.930 553.127 563.852 576.493 585.207 595.882 603.446

550 468.328 475.796 486.910 496.607 507.947 527.281 549.333 571.992 592.909 605.667 616.878 630.084 639.183 650.324 658.215

600 514.529 522.365 534.019 544.180 556.056 576.286 599.333 622.988 644.800 658.094 669.769 683.516 692.982 704.568 712.771

650 560.885 569.074 581.245 591.853 604.242 625.331 649.333 673.942 696.614 710.421 722.542 736.807 746.625 758.639 767.141

700 607.380 615.907 628.577 639.613 652.497 674.413 699.333 724.861 748.359 762.661 775.211 789.974 800.131 812.556 821.347

750 653.997 662.852 676.003 687.452 700.814 723.526 749.333 775.747 800.043 814.822 827.785 843.029 853.514 866.336 875.404

800 700.725 709.897 723.513 735.362 749.185 772.669 799.333 826.604 851.671 866.911 880.275 895.984 906.786 919.991 929.329

850 747.554 757.033 771.099 783.337 797.607 821.839 849.333 877.435 903.249 918.937 932.689 948.848 959.957 973.534 983.133

900 794.475 804.252 818.756 831.370 846.075 871.032 899.333 928.241 954.782 970.904 985.032 1001.630 1013.036 1026.974 1036.826

950 841.480 851.547 866.477 879.457 894.584 920.248 949.333 979.026 1006.272 1022.816 1037.311 1054.334 1066.031 1080.320 1090.418

1000 888.564 898.912 914.257 927.594 943.133 969.484 999.333 1029.790 1057.724 1074.679 1089.531 1106.969 1118.948 1133.579 1143.917
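
For degrees of freedom not listed above, the entries can be regenerated from the inverse distribution function. The short Python sketch below is an illustration, not part of the original tables: it assumes the SciPy library is available, and the fifteen column probabilities are those inferred in the header above. It reproduces the ν = 204 row.

from scipy.stats import chi2

# Upper-tail probabilities assumed for the fifteen columns of the table.
P = [0.995, 0.990, 0.975, 0.950, 0.900, 0.750, 0.500,
     0.250, 0.100, 0.050, 0.025, 0.010, 0.005, 0.002, 0.001]

nu = 204  # degrees of freedom (first row on this page)

# chi2.ppf is the inverse CDF, so chi2.ppf(1 - p, nu) is the value
# exceeded with upper-tail probability p.
row = [chi2.ppf(1.0 - p, nu) for p in P]
print(" ".join("%.3f" % x for x in row))
# Should reproduce the ν = 204 row above to the printed precision:
# 155.728 159.969 166.338 ... 272.155

Setting nu to any other degrees-of-freedom value regenerates the corresponding row, including values between the tabulated steps of 50 used beyond ν = 250.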
