
Nonlinear Equations

Bisection Method
Bracket the root, then bisect the bracket each time: each estimate is taken halfway between the left and right brackets. Depending on whether f at the midpoint is > or < 0, the midpoint replaces the left or right bracket. Always converges as long as the initial brackets are valid (f changes sign between them). Always converges at a useful rate (the error halves per step).
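A minimal Python sketch of the method (the function and bracket names are illustrative):

```python
def bisection(f, lo, hi, tol=1e-10):
    """Find a root of f in [lo, hi]; f(lo) and f(hi) must differ in sign."""
    assert f(lo) * f(hi) < 0, "initial bracket must straddle a root"
    while hi - lo > tol:
        mid = (lo + hi) / 2          # estimate halfway between L and R brackets
        if f(lo) * f(mid) <= 0:      # sign change in the left half
            hi = mid                 # keep [lo, mid]
        else:
            lo = mid                 # keep [mid, hi]
    return (lo + hi) / 2
```

Each pass halves the bracket width, which is the "half error per step" rate stated above.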
Regula Falsi Method
Estimates the slope over the bracket [a, b] by (f(b) − f(a))/(b − a). From this slope, the point where f(x) = 0 is found by extending the line from the bracket endpoints until f(x) = 0 according to the approximated f. The iterated estimate is: c = b − f(b)·(b − a)/(f(b) − f(a)). We choose the next bracket in the same way we choose the next bracket for bisection. Always converges if the initial brackets are valid, but doesn't always converge quickly.
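A sketch of the iteration (variable names are illustrative):

```python
def regula_falsi(f, a, b, iters=100):
    """False-position method: the next estimate is where the secant line
    through (a, f(a)) and (b, f(b)) crosses zero."""
    assert f(a) * f(b) < 0, "initial bracket must straddle a root"
    c = a
    for _ in range(iters):
        c = b - f(b) * (b - a) / (f(b) - f(a))  # zero of the secant line
        if f(a) * f(c) < 0:                     # root lies between a and c
            b = c
        else:                                   # root lies between c and b
            a = c
    return c
```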
Secant method
Root-finding method that uses a succession of roots of secant lines to better approximate f. Effectively draws a line through the two most recent estimates (a, f(a)) and (b, f(b)) and asks, "Where on this line does the zero lie?" Unlike regula falsi, no bracket is maintained.
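A minimal sketch:

```python
def secant(f, x0, x1, iters=50):
    """Replace the older point with the root of the secant line through
    the two most recent estimates; no bracket is maintained."""
    for _ in range(iters):
        if f(x1) == f(x0):
            break                    # flat secant line: cannot divide
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    return x1
```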
Fixed Point Iteration
Fixed point iteration finds fixed points of a function by iterating the formula x_{n+1} = g(x_n). Fixed points are points where g(x) = x. Used for implicit equations (e.g. finding channel depth using Darcy-Weisbach).
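A sketch of the iteration:

```python
def fixed_point(g, x0, iters=100):
    """Iterate x_{n+1} = g(x_n); converges to a point x* with g(x*) = x*
    provided |g'(x)| < 1 near that point."""
    x = x0
    for _ in range(iters):
        x = g(x)
    return x
```

For example, iterating g(x) = (x + 2/x)/2 from x0 = 1 converges to √2.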
Newton-Raphson
Method for finding roots of functions. Extends from the given point to zero according to the derivative at that point, i.e. follows the tangent line: x_{n+1} = x_n − f(x_n)/f′(x_n). Doesn't always work (e.g. if the derivative doesn't exist, stationary points are encountered, etc.)
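A sketch, with a guard for the stationary-point failure mode mentioned above:

```python
def newton(f, df, x0, iters=20):
    """Newton-Raphson: follow the tangent at the current estimate to zero,
    x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(iters):
        d = df(x)
        if d == 0:
            break                # stationary point: the method breaks down
        x = x - f(x) / d
    return x
```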

Systems of Linear Equations


When Are Systems of Linear Equations Solvable?
Systems of linear equations are solvable with a unique solution when the number of equations is equal to the number of unknowns, and the equations are linearly independent. Other conditions that indicate a solution exists include diagonal dominance (the magnitude of the entry on the diagonal of each row is greater than the sum of the magnitudes of all other entries in that row).
Jacobi Iteration
If Ax = b, then we can split the matrix A into two matrices, D and R, where D holds only the diagonal entries along the diagonal, and R is everything else. The next iterate of x is given by x^(k+1) = D⁻¹(b − R·x^(k)). The solution element by element is: x_i^(k+1) = (b_i − Σ_{j≠i} a_ij·x_j^(k)) / a_ii for elements i = 1, …, n of the x vector.
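The element-by-element update can be sketched as follows (A is assumed diagonally dominant so the iteration converges):

```python
def jacobi(A, b, iters=50):
    """Element-by-element Jacobi iteration for Ax = b:
    x_i <- (b_i - sum_{j != i} a_ij * x_j) / a_ii, always reading the
    PREVIOUS iterate on the right-hand side."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Gauss-Seidel differs only in using already-updated x values within the same sweep.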

Numerical vs. Exact Methods


Numerical (iterative) methods tend to be preferable when the matrix is both large and sparse, as they are quicker in those cases. Exact methods (e.g. Gaussian elimination) are preferred for smaller or fully filled matrices, where they tend to be faster.

Numerical Analysis and Quadrature


Lower and Upper/Left and Right Rectangular, Trapezoidal Integration Methods
Methods of estimating the value of an integral by taking the value of the function at the left/right edge of each strip (or the lowest/highest value on the strip, for lower/upper sums) over an interval. The total value of the integral is the sum of the estimates for each strip made using this rule; this is a Riemann sum. The trapezoidal estimate is the average of the left and right sums.
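A sketch of the left-edge rule and the trapezoidal average built from it:

```python
def left_riemann(f, a, b, n=1000):
    """Sum of strip areas using the value at the left edge of each strip."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def trapezoid(f, a, b, n=1000):
    """Average of the left and right Riemann sums."""
    h = (b - a) / n
    right = h * sum(f(a + (i + 1) * h) for i in range(n))
    return (left_riemann(f, a, b, n) + right) / 2
```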
Taylor Series
Method of approximating a function around a point by using the values of the function's derivatives at that point. Takes the form of a summation, given in Useful Formulae.
Simpson's Rule
Approximates a function f(x) over the interval [a, b] using a quadratic. Generally more accurate than rectangular/trapezoidal methods.
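The three-point version can be sketched as:

```python
def simpson(f, a, b):
    """Three-point Simpson's rule: integrate the quadratic through the
    endpoints and the midpoint of [a, b]."""
    h = (b - a) / 2
    return (h / 3) * (f(a) + 4 * f((a + b) / 2) + f(b))
```

It is exact for polynomials up to and including cubics.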

Forward/Backward Differentiation
Estimation of a derivative by projecting forward some step h from a given x. The estimate for f′(x) is f′(x) ≈ (f(x + h) − f(x))/h for either positive h (forward difference, +ve) or negative h (backward difference, -ve).
Implicit/Explicit Euler (IE and EE)
Explicit Euler is formulated as y(t + h) = y(t) + h·f(t, y(t)). The difference between EE and IE is that while EE takes the value of the derivative at the start of the time step, IE takes the derivative at the end of the time step. In other words, IE is formulated as: y(t + h) = y(t) + h·f(t + h, y(t + h)). This makes IE implicit, as the value at (t + h) depends on the value of y(t + h) itself. While EE is much easier to deal with, IE is more stable. EE is sometimes unstable, but EE is normally OK (but not always).
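A sketch of both schemes. In general, IE needs an equation solve each step; here it is specialised to the linear test problem y′ = a·y, where the implicit update has a closed form (that restriction is an assumption of this sketch, not part of the notes):

```python
def explicit_euler(f, y0, t0, t1, n):
    """EE: derivative evaluated at the START of each time step."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def implicit_euler_linear(a, y0, t0, t1, n):
    """IE for y' = a*y: solving y_new = y + h*a*y_new for y_new gives
    y_new = y / (1 - h*a), i.e. the derivative taken at the END of the step."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        y = y / (1 - h * a)
    return y
```

Both are run on y′ = −y below, whose exact value at t = 1 is e⁻¹ ≈ 0.3679.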

Probability and Statistics


Useful Definitions
PMF = probability mass function (probabilities of countable/discrete outcomes)
PDF = probability density function (continuous probability)
CDF = cumulative distribution function, the integral of the PDF
Outlier = data point lying outside the normal range of the data

Bayes' Theorem
Bayes' theorem concerns conditional probability. Specifically:
P(A|B) = P(B|A)·P(A) / P(B)
Think in terms of Venn diagrams.

Normal and Lognormal Distributions


Normal distribution: you know this. Some mean μ, some standard deviation σ. 68/95/99.7 rule for μ ± kσ with k = 1, 2, 3 respectively. Lognormal distribution: the distribution of e^X for normally distributed X (equivalently, the logarithm of the variable is normally distributed). Used for quantities where anything less than zero doesn't exist, for example.
Bernoulli and Binomial Distributions
Bernoulli distribution is the distribution for a single discrete trial with a given success rate. The binomial distribution is closely related to the Bernoulli distribution, as the binomial distribution is formed by repeated independent trials of the Bernoulli distribution. They are related by: P(k) = C(n, k)·p^k·(1 − p)^(n−k), where P(k) is the probability of k successes, p is the probability of the success of an individual trial and n is the number of trials. The formula for C(n, k) is n! / (k!(n − k)!).
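The PMF in one line, using the standard-library binomial coefficient:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(k) = C(n, k) * p^k * (1 - p)^(n - k): probability of exactly
    k successes in n independent Bernoulli(p) trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)
```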

Poisson Distribution
The Poisson distribution is a distribution describing occurrences of discrete events with a given rate, λ. Poisson is used for modelling things like the number of calls a call centre receives, or the number of letters received in a mailbox: numbers of events independent of one another where multiple occurrences are possible. The PMF is given by P(k) = λ^k·e^(−λ) / k!, where λ is the rate given, k is the number of events, and e is the exponential constant.

Linear Regression Model


Model used to approximate a distribution of data that is linear. Comes in the form y = a + b·x, where a and b are parameters. Can be calculated using the least-squares formulas: b = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)², a = ȳ − b·x̄.
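The least-squares fit as a sketch (function name is illustrative):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x:
    b = sum((x - xbar)*(y - ybar)) / sum((x - xbar)^2), a = ybar - b*xbar."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b
```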

What do we do with outliers?


Lots of outliers can indicate that the data may be invalid, in which case they might need to be removed. Deal with them on a case-by-case basis.

Useful Formulae

Fixed point iteration: x_{n+1} = g(x_n)
Newton's method: x_{n+1} = x_n − f(x_n)/f′(x_n)
Forward/backward differentiation: f′(x) ≈ (f(x + h) − f(x))/h (forward), f′(x) ≈ (f(x) − f(x − h))/h (backward)
Simpson's rule for 3 points: ∫[a,b] f(x) dx ≈ (h/3)[f(a) + 4f(m) + f(b)], with m = (a + b)/2 and h = (b − a)/2
Explicit Euler: y(t + h) = y(t) + h·f(t, y(t))
Implicit Euler: y(t + h) = y(t) + h·f(t + h, y(t + h))
Element by element Jacobi iteration: x_i^(k+1) = (b_i − Σ_{j≠i} a_ij·x_j^(k)) / a_ii
Element by element Gauss-Seidel iteration: x_i^(k+1) = (b_i − Σ_{j<i} a_ij·x_j^(k+1) − Σ_{j>i} a_ij·x_j^(k)) / a_ii
Taylor series: f(x) = Σ_{n=0}^∞ f^(n)(a)·(x − a)^n / n!
Bayes' theorem: P(A|B) = P(B|A)·P(A) / P(B)
Normal distribution: P(μ − kσ ≤ X ≤ μ + kσ) = 0.68, 0.95, 0.997 for k = 1, 2, 3
Linear regression formula: b = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)², a = ȳ − b·x̄
Monte Carlo Methods


Methods used to estimate distributions when relationships are more complicated: for example, if the amount of work needed is normally distributed but the cost is proportional to the work squared, then modelling cost requires Monte Carlo methods. Randomly generate numbers according to the known distribution, then use the values generated to build the model of the derived quantity.
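A minimal sketch of the work/cost example above; the mean, standard deviation, unit cost and sample count are all illustrative assumptions:

```python
import random

def mean_cost(n=100_000, work_mean=10.0, work_sd=2.0, unit_cost=1.0, seed=0):
    """Work ~ Normal(work_mean, work_sd); cost = unit_cost * work**2.
    Sample the input distribution, transform each sample, then summarise.
    All numeric parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    samples = [unit_cost * rng.gauss(work_mean, work_sd) ** 2 for _ in range(n)]
    return sum(samples) / n
```

For these parameters E[work²] = mean² + sd² = 104, so the estimate should land near 104.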

Bayesian Inference
Process of using Bayes' theorem to update probabilities as further information is obtained.

Secant method: x_{n+1} = x_n − f(x_n)·(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))
Regula Falsi next estimate: c = b − f(b)·(b − a) / (f(b) − f(a))