
Harvard College

Math 21b: Linear Algebra and Differential Equations
Formula and Theorem Review

Tommy MacWilliam '13
tmacwilliam@college.harvard.edu
May 5, 2010

Contents

1 Linear Equations
  1.1 Standard Representation of a Vector
  1.2 Reduced Row Echelon Form
  1.3 Elementary Row Operations
  1.4 Rank of a Matrix
  1.5 Dot Product of Vectors
  1.6 The Product A~x
  1.7 Algebraic Rules for A~x

2 Linear Transformations
  2.1 Linear Transformations
  2.2 Scaling Matrix
  2.3 Orthogonal Projection onto a Line
  2.4 Reflection Matrix
  2.5 Rotation Matrix
  2.6 Shear Matrix
  2.7 Matrix Multiplication
  2.8 Invertibility
  2.9 Finding the Inverse
  2.10 Properties of Invertible Matrices

3 Subspaces of R^n and Their Dimensions
  3.1 Image of a Function
  3.2 Span
  3.3 Kernel
  3.4 Subspaces of R^n
  3.5 Linear Independence
  3.6 Dimension
  3.7 Rank-Nullity Theorem
  3.8 Coordinates
  3.9 Linearity of Coordinates
  3.10 Matrix of a Linear Transformation
  3.11 Similar Matrices

4 Linear Spaces
  4.1 Linear Spaces
  4.2 Finding a Basis of a Linear Space V
  4.3 Isomorphisms

5 Orthogonality and Least Squares
  5.1 Orthogonality and Length
  5.2 Orthonormal Vectors
  5.3 Orthogonal Projection onto a Subspace
  5.4 Orthogonal Complement
  5.5 Pythagorean Theorem
  5.6 Cauchy-Schwarz Inequality
  5.7 Angle Between Two Vectors
  5.8 Gram-Schmidt Process
  5.9 QR Decomposition
  5.10 Orthogonal Transformations
  5.11 Transpose
  5.12 Symmetric and Skew-Symmetric Matrices
  5.13 Matrix of an Orthogonal Projection
  5.14 Least-Squares Solution
  5.15 Inner Product Spaces
  5.16 Norm and Orthogonality
  5.17 Trace of a Matrix
  5.18 Orthogonal Projection in an Inner Product Space
  5.19 Fourier Analysis

6 Determinants
  6.1 Sarrus's Rule
  6.2 Patterns
  6.3 Determinants and Gauss-Jordan Elimination
  6.4 Laplace Expansion
  6.5 Properties of the Determinant

7 Eigenvalues and Eigenvectors
  7.1 Eigenvalues
  7.2 Characteristic Polynomial
  7.3 Algebraic Multiplicity
  7.4 Eigenvalues, Determinant, and Trace
  7.5 Eigenspace
  7.6 Eigenbasis
  7.7 Geometric Multiplicity
  7.8 Diagonalization
  7.9 Powers of a Diagonalizable Matrix
  7.10 Stable Equilibrium

9 Linear Differential Equations
  9.1 Exponential Growth and Decay
  9.2 Linear Dynamical Systems
  9.3 Continuous Dynamical Systems with Real Eigenvalues
  9.4 Continuous Dynamical Systems with Complex Eigenvalues
  9.5 Strategy for Solving Linear Differential Equations
  9.6 Eigenfunctions
  9.7 Characteristic Polynomial of a Linear Differential Operator
  9.8 Kernel of a Linear Differential Operator
  9.9 Characteristic Polynomial with Complex Solutions
  9.10 First-Order Linear Differential Equations
  9.11 Strategy for Solving Linear Differential Equations
  9.12 Linearization of a Nonlinear System
  9.13 The Heat Equation
  9.14 The Wave Equation

1 Linear Equations

1.1 Standard Representation of a Vector

~v = [ x ]
     [ y ]

1.2 Reduced Row Echelon Form

If a column contains a leading 1, then all the other entries in that column are 0.
If a row contains a leading 1, then each row above it contains a leading 1 further to
the left.

1.3 Elementary Row Operations

Divide a row by a nonzero scalar.


Subtract a multiple of a row from another row.
Swap two rows.

1.4 Rank of a Matrix

The rank of a matrix A is the number of leading 1s in rref (A).
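
As an illustrative check (not part of the original notes), rref and rank can be computed with SymPy; the matrix below is an arbitrary example.

    from sympy import Matrix

    # arbitrary example matrix (illustrative values only)
    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 0, 1]])

    R, pivot_columns = A.rref()    # R = rref(A); pivot_columns marks the leading 1s
    print(R)
    print(len(pivot_columns))      # number of leading 1s in rref(A) ...
    print(A.rank())                # ... equals rank(A)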

1.5 Dot Product of Vectors

~v · ~w = v1 w1 + ··· + vn wn

1.6 The Product A~x

If the rows of A are ~w1, ..., ~wn, then

A~x = [ ~w1 · ~x ]
      [    ...   ]
      [ ~wn · ~x ]

i.e., the ith entry of A~x is the dot product of the ith row of A with ~x.

1.7 Algebraic Rules for A~x

A(~x + ~y) = A~x + A~y

A(k~x) = k(A~x)
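
A quick numerical sanity check of the product A~x and the two rules above (a sketch using NumPy; the matrix, vectors, and scalar are arbitrary example values, not from the notes):

    import numpy as np

    A = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3x2 example matrix
    x = np.array([1., -1.])
    y = np.array([2., 0.5])
    k = 3.0

    # ith entry of A @ x is the dot product of the ith row of A with x
    assert np.allclose(A @ x, [row @ x for row in A])

    # A(x + y) = Ax + Ay and A(kx) = k(Ax)
    assert np.allclose(A @ (x + y), A @ x + A @ y)
    assert np.allclose(A @ (k * x), k * (A @ x))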

2 Linear Transformations

2.1 Linear Transformations

A function T from R^m to R^n is called a linear transformation if there exists a matrix A such that

T(~x) = A~x

A transformation is called linear if and only if

T(~v + ~w) = T(~v) + T(~w)

T(k~v) = kT(~v)

2.2 Scaling Matrix

The matrix for scaling by a factor k has the form:

[ k 0 ]
[ 0 k ]

2.3 Orthogonal Projection onto a Line



projL(~x) = ( (~x · ~w) / (~w · ~w) ) ~w

where ~w is a vector parallel to the line L.

2.4 Reflection Matrix

A reflection matrix (reflection about a line through the origin) has the form:

[ a  b ]
[ b -a ]

where a^2 + b^2 = 1.

2.5 Rotation Matrix

A rotation matrix (rotation through an angle θ) has the form:

[ cos θ  -sin θ ]
[ sin θ   cos θ ]

2.6 Shear Matrix

A horizontal shear matrix has the form:




[ 1 k ]
[ 0 1 ]

and a vertical shear matrix has the form:

[ 1 0 ]
[ k 1 ]
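
These 2 × 2 matrices are easy to experiment with numerically; the sketch below (illustrative only, using NumPy, with an arbitrary angle and shear factor) builds a rotation and a horizontal shear and applies them to a vector.

    import numpy as np

    theta, k = np.pi / 4, 2.0          # example angle and shear factor

    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    shear = np.array([[1.0, k],
                      [0.0, 1.0]])      # horizontal shear

    x = np.array([1.0, 1.0])
    print(rotation @ x)                 # x rotated by 45 degrees
    print(shear @ x)                    # x sheared horizontally

    # rotation preserves length; shear generally does not
    assert np.isclose(np.linalg.norm(rotation @ x), np.linalg.norm(x))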

2.7 Matrix Multiplication

The product of matrices BA is defined as the matrix of the linear transformation T (~x) =
B(A~x) = (BA)~x.
(AB)C = A(BC)
A(B + C) = AB + AC
(kA)B = A(kB) = k(AB)

2.8 Invertibility

An n × n matrix A is invertible if and only if:


1. rref(A) = I_n
2. rank(A) = n

or, more simply, if and only if det(A) ≠ 0.

2.9 Finding the Inverse

1. Form the n × 2n matrix [ A | I_n ].
2. Compute rref[ A | I_n ].
3. If rref[ A | I_n ] is of the form [ I_n | B ], then A^{-1} = B.
4. If rref[ A | I_n ] is of another form, then A is not invertible.
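
A small SymPy sketch of this procedure (the matrix is an arbitrary invertible example, not from the notes): row-reduce [ A | I ] and read the inverse off the right half.

    from sympy import Matrix, eye

    A = Matrix([[2, 1],
                [5, 3]])                      # arbitrary invertible example

    augmented = A.row_join(eye(2))            # form [A | I]
    R, _ = augmented.rref()                   # rref[A | I] = [I | A^(-1)]

    A_inv = R[:, 2:]                          # right half is the inverse
    assert A_inv == A.inv()                   # agrees with SymPy's built-in inverse
    assert A * A_inv == eye(2)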

2.10 Properties of Invertible Matrices

If an n × n matrix A is invertible:
The linear system A~x = ~b has a unique solution ~x for all ~b in Rn
rref (A) = In
rank(A) = n
im(A) = Rn
ker(A) = ~0
The column vectors of A are linearly independent and form a basis of Rn

3 Subspaces of R^n and Their Dimensions

3.1 Image of a Function
image(f ) = {f (x) : x in X} = {b in Y : b = f (x), for some x in X}

For a linear transformation T :


The zero vector is in the image of T
If ~v1 and ~v2 are in the image of T , then so is ~v1 + ~v2
If ~v is in the image of T , then so is k~v

3.2 Span

span(~v1, ..., ~vm) = {c1 ~v1 + ··· + cm ~vm : c1, ..., cm in R}

3.3 Kernel

The kernel of a linear transformation T is the solution set of the linear system:
A~x = ~0
The zero vector is in the kernel of T
If ~v1 and ~v2 are in the kernel of T, then so is ~v1 + ~v2
If ~v is in the kernel of T , then so is k~v

3.4 Subspaces of R^n

A subset W of the vector space Rn is a linear subspace if and only if:


1. W contains the zero vector
2. W is closed under addition
3. W is closed under scalar multiplication
The image and kernel of a linear transformation are linear subspaces.

3.5 Linear Independence

1. If a vector ~vi in the list ~v1, ..., ~vn can be expressed as a linear combination of the preceding vectors ~v1, ..., ~v(i-1), then ~vi is redundant.
2. The vectors ~v1, ..., ~vn are linearly independent if no vector is redundant.
3. The vectors ~v1, ..., ~vn form a basis of a subspace V if they span V and are linearly independent.

3.6 Dimension

The number of vectors in a basis of a subspace V is the dimension of V.

3.7 Rank-Nullity Theorem

For an n × m matrix A:

dim(ker A) + dim(im A) = m
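
A quick SymPy check of rank-nullity on an arbitrary example matrix (illustrative only): dim(im A) is the rank, and dim(ker A) is the number of nullspace basis vectors.

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [2, 4, 1, 3]])        # arbitrary 2x4 example, so m = 4

    dim_im = A.rank()                 # dimension of the image (column space)
    dim_ker = len(A.nullspace())      # dimension of the kernel (null space)

    assert dim_ker + dim_im == A.cols # rank-nullity: equals m, the number of columns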

3.8 Coordinates

For a basis B = (~v1, ..., ~vn) of a subspace V, any vector ~x in V can be written as:

~x = c1 ~v1 + ··· + cn ~vn

c1, ..., cn are called the B-coordinates of ~x, with

[~x]B = [ c1 ]
        [ .. ]
        [ cn ]

~x = S [~x]B   and   [~x]B = S^{-1} ~x,   where S = [ ~v1 ··· ~vn ]
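
As a numerical illustration (arbitrary basis and vector, not from the notes), the B-coordinates of ~x are obtained by solving S c = ~x:

    import numpy as np

    v1 = np.array([1.0, 1.0])
    v2 = np.array([1.0, -1.0])            # basis B = (v1, v2) of R^2 (example)
    S = np.column_stack([v1, v2])

    x = np.array([3.0, 1.0])
    coords = np.linalg.solve(S, x)        # [x]_B = S^(-1) x
    print(coords)                         # here [2., 1.], since x = 2*v1 + 1*v2

    assert np.allclose(S @ coords, x)     # x = S [x]_B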

3.9 Linearity of Coordinates

[~x + ~y ]B = [~x]B + [~y ]B


[k~x]B = k[~x]B

3.10 Matrix of a Linear Transformation

The matrix B that transforms [~x]B into [T(~x)]B is called the B-matrix of T:

[T(~x)]B = B [~x]B,   where B = [ [T(~v1)]B ··· [T(~vn)]B ]

3.11 Similar Matrices

Two matrices A and B are similar if and only if there exists an invertible matrix S such that:

AS = SB,   or equivalently   B = S^{-1} A S

4 Linear Spaces

4.1 Linear Spaces

A linear space V is a set with rules for addition and scalar multiplication that satisfies the
following properties:
1. (f + g) + h = f + (g + h)
2. f + g = g + f
3. There exists a neutral element n in V such that f + n = f
4. For each f in V there exists a g in V such that f + g = 0
5. k(f + g) = kf + kg
6. (c + k)f = cf + kf
7. c(kf ) = (ck)f
8. 1f = f

4.2 Finding a Basis of a Linear Space V

1. Find a typical element w of V in terms of arbitrary constants.
2. Express w as a linear combination of elements of V.
3. If these elements are linearly independent, they form a basis of V.

4.3 Isomorphisms

A linear transformation T is an isomorphism if T is invertible.


A linear transformation T from V to W is an isomorphism if any only if ker(T ) = 0
and im(T ) = W .
Coordinate changes are isomorphisms.
If V is isomorphic to W , then dim(V ) = dim(W ).


5 Orthogonality and Least Squares

5.1 Orthogonality and Length

Two vectors ~v and ~w are orthogonal if ~v · ~w = 0.

The length of a vector ~v is ||~v|| = √(~v · ~v).

A vector ~u is a unit vector if its length is 1.

5.2 Orthonormal Vectors

The vectors ~u1, ..., ~un are orthonormal if they are unit vectors orthogonal to each other:

~ui · ~uj = 1 if i = j,   0 if i ≠ j

5.3 Orthogonal Projection onto a Subspace

If V is a subspace with an orthonormal basis ~u1, ..., ~un:

projV(~x) = (~u1 · ~x) ~u1 + ··· + (~un · ~x) ~un
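
A NumPy sketch of this formula (illustrative example vectors only), projecting a vector onto the subspace spanned by an orthonormal basis:

    import numpy as np

    # orthonormal basis of a plane in R^3 (example)
    u1 = np.array([1.0, 0.0, 0.0])
    u2 = np.array([0.0, 1.0, 0.0])

    x = np.array([3.0, 4.0, 5.0])
    proj = (u1 @ x) * u1 + (u2 @ x) * u2   # projV(x)
    print(proj)                            # [3., 4., 0.]

    # x - proj is orthogonal to the subspace
    assert np.isclose((x - proj) @ u1, 0) and np.isclose((x - proj) @ u2, 0)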

5.4 Orthogonal Complement

The orthogonal complement V⊥ of a subspace V of R^n is the set of vectors ~x orthogonal to all vectors in V:

V⊥ = {~x in R^n : ~v · ~x = 0 for all ~v in V}

V⊥ is a subspace of R^n

V ∩ V⊥ = {~0}

dim(V) + dim(V⊥) = n

(V⊥)⊥ = V

5.5 Pythagorean Theorem

If ~x and ~y are orthogonal:

||~x + ~y||^2 = ||~x||^2 + ||~y||^2

5.6 Cauchy-Schwarz Inequality

|~x · ~y| ≤ ||~x|| ||~y||

5.7 Angle Between Two Vectors

θ = arccos( (~x · ~y) / (||~x|| ||~y||) )

5.8 Gram-Schmidt Process

For a basis ~v1, ..., ~vn of a subspace V:

~u1 = (1/||~v1||) ~v1,   ...,   ~ui = (1/||~vi⊥||) ~vi⊥

where

~vi⊥ = ~vi − (~u1 · ~vi) ~u1 − ··· − (~u_{i-1} · ~vi) ~u_{i-1}
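
A short NumPy sketch of the Gram-Schmidt process as described above (arbitrary example vectors; it assumes the input vectors are linearly independent):

    import numpy as np

    def gram_schmidt(vectors):
        """Turn a list of linearly independent vectors into an orthonormal list."""
        basis = []
        for v in vectors:
            # subtract the components along the already-built orthonormal vectors
            v_perp = v - sum((u @ v) * u for u in basis)
            basis.append(v_perp / np.linalg.norm(v_perp))
        return basis

    vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
    us = gram_schmidt(vs)
    print(us)
    assert np.isclose(us[0] @ us[1], 0)                          # orthogonal
    assert all(np.isclose(np.linalg.norm(u), 1) for u in us)     # unit length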

5.9 QR Decomposition

For an n × m matrix M with linearly independent columns ~v1, ..., ~vm, we can write M = QR, where Q is the n × m matrix whose columns ~u1, ..., ~um are the orthonormal vectors produced by the Gram-Schmidt process, and R is the upper triangular m × m matrix with entries:

r11 = ||~v1||,   rjj = ||~vj⊥||,   rij = ~ui · ~vj for i < j
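
NumPy's built-in QR factorization shows the decomposition concretely (arbitrary example matrix; note that numpy.linalg.qr may flip signs relative to the Gram-Schmidt convention above):

    import numpy as np

    M = np.array([[1.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0]])                 # arbitrary 3x2 example

    Q, R = np.linalg.qr(M)                     # reduced QR: Q is 3x2, R is 2x2
    print(Q)                                   # columns are orthonormal
    print(R)                                   # upper triangular

    assert np.allclose(Q.T @ Q, np.eye(2))     # Q has orthonormal columns
    assert np.allclose(Q @ R, M)               # M = QR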

5.10 Orthogonal Transformations

A linear transformation T is orthogonal if it preserves the length of vectors, that is,

||T(~x)|| = ||~x||

T is orthogonal if the vectors T(~e1), ..., T(~en) form an orthonormal basis of R^n.

The matrix A is orthogonal if A^T A = I_n.

The matrix A is orthogonal if A^{-1} = A^T.

A matrix A is orthogonal if its columns form an orthonormal basis of R^n.

The product AB of two orthogonal matrices A and B is orthogonal.

The inverse A^{-1} of an orthogonal matrix A is orthogonal.

5.11 Transpose

The transpose A^T of an m × n matrix A is the n × m matrix whose ijth entry is the jith entry of A.

(AB)^T = B^T A^T

(A^T)^{-1} = (A^{-1})^T

rank(A) = rank(A^T)

5.12 Symmetric and Skew-Symmetric Matrices

An n × n matrix A is symmetric if A^T = A.

An n × n matrix A is skew-symmetric if A^T = −A.

5.13 Matrix of an Orthogonal Projection

The matrix of the orthogonal projection onto a subspace V with an orthonormal basis ~u1, ..., ~un is

Q Q^T,   where Q = [ ~u1 ··· ~un ]

or equivalently, for any basis ~v1, ..., ~vn of V,

A (A^T A)^{-1} A^T,   where A = [ ~v1 ··· ~vn ]

5.14 Least-Squares Solution

The unique least-squares solution of a linear system A~x = ~b, where ker(A) = {~0}, is

~x = (A^T A)^{-1} A^T ~b
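
A NumPy sketch comparing the normal-equations formula above with numpy.linalg.lstsq on an arbitrary overdetermined example system (illustrative data only):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])          # 3 equations, 2 unknowns (example data)
    b = np.array([1.0, 2.0, 2.0])

    x_normal = np.linalg.solve(A.T @ A, A.T @ b)        # (A^T A)^(-1) A^T b
    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)     # library least-squares solver

    assert np.allclose(x_normal, x_lstsq)
    print(x_normal)                                     # minimizes ||A x - b||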

5.15 Inner Product Spaces

An inner product on a linear space V, denoted ⟨f, g⟩, has the following properties:

⟨f, g⟩ = ⟨g, f⟩

⟨f + h, g⟩ = ⟨f, g⟩ + ⟨h, g⟩

⟨cf, g⟩ = c⟨f, g⟩

⟨f, f⟩ > 0 for all nonzero f

5.16 Norm and Orthogonality

The norm of an element f of an inner product space is

||f|| = √⟨f, f⟩

Two elements f and g of an inner product space are orthogonal if

⟨f, g⟩ = 0

5.17 Trace of a Matrix

The trace tr(A) of a matrix A is the sum of its diagonal entries.

5.18 Orthogonal Projection in an Inner Product Space

If g1, ..., gn is an orthonormal basis of a subspace W of an inner product space V:

projW(f) = ⟨g1, f⟩ g1 + ··· + ⟨gn, f⟩ gn

5.19 Fourier Analysis

fn(t) = a0 (1/√2) + b1 sin(t) + c1 cos(t) + ··· + bn sin(nt) + cn cos(nt)

where (all integrals taken over [−π, π]):

a0 = ⟨f, 1/√2⟩ = (1/(√2 π)) ∫ f(t) dt

bk = ⟨f, sin(kt)⟩ = (1/π) ∫ f(t) sin(kt) dt

ck = ⟨f, cos(kt)⟩ = (1/π) ∫ f(t) cos(kt) dt
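
For instance (an illustrative numerical check, not from the notes), the coefficients can be computed by numerical integration; the sketch below uses SciPy's quad for the example function f(t) = t on [−π, π], whose Fourier series contains only sine terms:

    import numpy as np
    from scipy.integrate import quad

    f = lambda t: t                      # example function on [-pi, pi]

    def b(k):   # b_k = (1/pi) * integral of f(t) sin(kt) over [-pi, pi]
        val, _ = quad(lambda t: f(t) * np.sin(k * t), -np.pi, np.pi)
        return val / np.pi

    def c(k):   # c_k = (1/pi) * integral of f(t) cos(kt) over [-pi, pi]
        val, _ = quad(lambda t: f(t) * np.cos(k * t), -np.pi, np.pi)
        return val / np.pi

    print([round(b(k), 4) for k in (1, 2, 3)])   # approximately 2, -1, 2/3
    print([round(c(k), 4) for k in (1, 2, 3)])   # numerically zero, since f is odd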

6 Determinants

6.1 Sarrus's Rule

For a 3 × 3 matrix A, write the first 2 columns again to the right of A, then multiply along the diagonals to get 6 products: add the 3 products that run down and to the right, and subtract the 3 products that run up and to the right, to get the determinant. (Sarrus's rule does not generalize directly to larger matrices.)

6.2 Patterns

A pattern of an n × n matrix A is a way to choose n entries of A such that each entry is in a unique row and column.

The product of the entries in a pattern P is denoted prod P.

Two entries in a pattern form an inversion if one is located above and to the right of the other.

The signature of a pattern is defined as sgn P = (−1)^(number of inversions in P)

det(A) = Σ (sgn P)(prod P), where the sum is taken over all patterns P of A

6.3 Determinants and Gauss-Jordan Elimination

If B is an n × n matrix obtained by applying an elementary row operation to an n × n matrix A:

If B is obtained by dividing a row by k: det(B) = (1/k) det(A)

If B is obtained by a row swap: det(B) = −det(A)

If B is obtained by adding a multiple of a row to another row: det(B) = det(A)

6.4 Laplace Expansion

Expansion down the jth column:

det(A) = Σ (i = 1 to n) (−1)^(i+j) a_ij det(A_ij)

Expansion along the ith row:

det(A) = Σ (j = 1 to n) (−1)^(i+j) a_ij det(A_ij)

where A_ij is the matrix obtained from A by deleting its ith row and jth column.
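
A small recursive Python sketch of Laplace expansion along the first row (purely illustrative; for real work use numpy.linalg.det, since this recursion costs on the order of n! operations):

    import numpy as np

    def det_laplace(A):
        """Determinant by Laplace expansion along the first row."""
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0.0
        for j in range(n):
            # A_1j: delete row 0 and column j
            minor = [row[:j] + row[j+1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det_laplace(minor)
        return total

    A = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0],
         [7.0, 8.0, 10.0]]                     # arbitrary example
    print(det_laplace(A))                      # -3.0
    assert np.isclose(det_laplace(A), np.linalg.det(np.array(A)))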

6.5 Properties of the Determinant

det(A^T) = det(A)

det(AB) = det(A) det(B)

If A and B are similar, then det(A) = det(B)

det(A^{-1}) = 1/det(A) = det(A)^{-1}

7 Eigenvalues and Eigenvectors

7.1 Eigenvalues

λ is an eigenvalue of an n × n matrix A if and only if

det(A − λ I_n) = 0

7.2 Characteristic Polynomial

det(A − λ I_n) = (−1)^n λ^n + (−1)^(n−1) tr(A) λ^(n−1) + ··· + det(A)

7.3 Algebraic Multiplicity

An eigenvalue λ has algebraic multiplicity k if it is a root of multiplicity k of the characteristic polynomial.

7.4 Eigenvalues, Determinant, and Trace

For an n × n matrix A with eigenvalues λ1, ..., λn (listed with their algebraic multiplicities):

det(A) = λ1 λ2 ··· λn = Π (k = 1 to n) λk

tr(A) = λ1 + ··· + λn = Σ (k = 1 to n) λk

7.5 Eigenspace

Eλ = ker(A − λ I_n) = {~v in R^n : A~v = λ~v}

7.6 Eigenbasis

An eigenbasis for an n × n matrix A is a basis of R^n consisting of eigenvectors of A.

7.7 Geometric Multiplicity

The dimension of the eigenspace Eλ is the geometric multiplicity of the eigenvalue λ.

7.8 Diagonalization

1. Find the eigenvalues and corresponding eigenvectors of the matrix A.
2. Let S be the matrix whose columns form an eigenbasis for A, and let D be the diagonal matrix with the corresponding eigenvalues of A along the diagonal.
3. Then D = S^{-1} A S and A = S D S^{-1}.
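
A NumPy sketch of this recipe on an arbitrary diagonalizable example matrix; numpy.linalg.eig returns the eigenvalues and an eigenvector matrix S directly:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                 # example with eigenvalues 5 and 2

    eigenvalues, S = np.linalg.eig(A)          # columns of S are eigenvectors
    D = np.diag(eigenvalues)

    assert np.allclose(np.linalg.inv(S) @ A @ S, D)   # D = S^(-1) A S
    assert np.allclose(S @ D @ np.linalg.inv(S), A)   # A = S D S^(-1)

    # powers of a diagonalizable matrix (see 7.9 below): A^t = S D^t S^(-1)
    t = 5
    assert np.allclose(np.linalg.matrix_power(A, t),
                       S @ np.diag(eigenvalues ** t) @ np.linalg.inv(S))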

7.9 Powers of a Diagonalizable Matrix

If A = S D S^{-1}, then

A^t = S D^t S^{-1}

7.10 Stable Equilibrium

~0 is an asymptotically stable equilibrium for the system ~x(t + 1) = A~x(t) if

lim (t→∞) ~x(t) = lim (t→∞) A^t ~x0 = ~0

for every initial state ~x0 (equivalently, if all eigenvalues λ of A satisfy |λ| < 1).

9 Linear Differential Equations

9.1 Exponential Growth and Decay


dx/dt = kx,   x(t) = e^(kt) x0

9.2 Linear Dynamical Systems

Discrete model: ~x(t + 1) = B ~x(t)

Continuous model: d~x/dt = A ~x

9.3 Continuous Dynamical Systems with Real Eigenvalues

d~x/dt = A~x,   ~x(t) = c1 e^(λ1 t) ~v1 + ··· + cn e^(λn t) ~vn

where ~v1, ..., ~vn form a real eigenbasis of A with eigenvalues λ1, ..., λn, and c1, ..., cn are determined by the initial condition ~x(0) = c1 ~v1 + ··· + cn ~vn.
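
A NumPy sketch of this solution formula for an arbitrary 2 × 2 example with real eigenvalues, checked against the matrix exponential from SciPy:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0],
                  [0.0, -1.0]])               # example with real eigenvalues 1 and -1
    x0 = np.array([1.0, 1.0])

    lam, V = np.linalg.eig(A)                 # eigenvalues and eigenvector matrix
    c = np.linalg.solve(V, x0)                # coefficients from x(0) = c1 v1 + ... + cn vn

    def x(t):
        return sum(c[i] * np.exp(lam[i] * t) * V[:, i] for i in range(len(lam)))

    t = 0.7
    assert np.allclose(x(t), expm(A * t) @ x0)   # agrees with x(t) = e^(At) x(0)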

9.4 Continuous Dynamical Systems with Complex Eigenvalues

d~x/dt = A~x,

~x(t) = e^(pt) S [ cos(qt)  −sin(qt) ] S^{-1} ~x0
                 [ sin(qt)   cos(qt) ]

where p ± iq are the eigenvalues of A with eigenvectors ~v ± i~w, and S = [ ~w  ~v ].

9.5 Strategy for Solving Linear Differential Equations

To solve an nth-order linear differential equation of the form T(f) = g:

1. Find a basis f1, ..., fn of ker(T).
2. Find a particular solution fp of T(f) = g.
3. The solutions f are of the form f = c1 f1 + ··· + cn fn + fp.


9.6 Eigenfunctions

A smooth function f is an eigenfunction of T with eigenvalue λ if

T(f) = λf

9.7 Characteristic Polynomial of a Linear Differential Operator

For T(f) = f^(n) + a_{n-1} f^(n-1) + ··· + a1 f′ + a0 f, the characteristic polynomial is defined as:

pT(λ) = λ^n + a_{n-1} λ^(n−1) + ··· + a1 λ + a0

The function e^(λt) is an eigenfunction of T with eigenvalue pT(λ):

T(e^(λt)) = pT(λ) e^(λt)

9.8 Kernel of a Linear Differential Operator

If T is a linear differential operator whose characteristic polynomial pT(λ) has distinct roots λ1, ..., λn, then the kernel of T has basis

e^(λ1 t), e^(λ2 t), ..., e^(λn t)

9.9 Characteristic Polynomial with Complex Solutions

If the zeros of pT(λ) are p ± iq, then the solutions of the differential equation T(f) = 0 are of the form

f(t) = e^(pt) (c1 cos(qt) + c2 sin(qt))

9.10 First-Order Linear Differential Equations

A differential equation of the form f′(t) − a f(t) = g(t) has solutions of the form

f(t) = e^(at) ∫ e^(−at) g(t) dt
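
As a check (illustrative only, using SymPy's ODE solver on an arbitrary example), f′ − 2f = e^t can be solved both ways; the formula above with a = 2 and g(t) = e^t gives the particular solution −e^t.

    import sympy as sp

    t = sp.symbols('t')
    f = sp.Function('f')

    # f'(t) - 2 f(t) = exp(t): an arbitrary first-order linear example
    solution = sp.dsolve(sp.Eq(f(t).diff(t) - 2 * f(t), sp.exp(t)), f(t))
    print(solution)      # general solution, equivalent to C1*exp(2*t) - exp(t)

    # the formula f(t) = e^(at) * integral(e^(-at) g(t) dt) with a = 2, g = exp(t)
    particular = sp.exp(2 * t) * sp.integrate(sp.exp(-2 * t) * sp.exp(t), t)
    print(sp.simplify(particular))   # -exp(t)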

9.11 Strategy for Solving Linear Differential Equations

To solve the nth-order linear differential equation T(f) = g:

1. Find n linearly independent solutions of T(f) = 0.
   Write the characteristic polynomial pT(λ) of T by replacing f^(k) with λ^k.
   Find the solutions λ1, ..., λn of the equation pT(λ) = 0.
   If λ is a solution of pT(λ) = 0, then e^(λt) is a solution of T(f) = 0.

   If λ is a solution of pT(λ) = 0 with multiplicity m, then e^(λt), t e^(λt), t^2 e^(λt), ..., t^(m−1) e^(λt) are solutions of T(f) = 0.
   If p ± iq are complex solutions of pT(λ) = 0, then e^(pt) cos(qt) and e^(pt) sin(qt) are real solutions of T(f) = 0.
2. If the equation is inhomogeneous, find one particular solution fp of T(f) = g.
   If g is of the form g(t) = A cos(ωt) + B sin(ωt), g(t) = A cos(ωt), or g(t) = A sin(ωt), look for a particular solution of the form fp(t) = P cos(ωt) + Q sin(ωt).
   If g is of the form g(t) = a0 + a1 t + ··· + an t^n, look for a particular solution of the form fp(t) = c0 + c1 t + ··· + cn t^n.
   If g is constant, look for a particular solution of the form fp(t) = c.
   If the equation is first order, of the form f′(t) − a f(t) = g(t), use the formula f(t) = e^(at) ∫ e^(−at) g(t) dt.
3. The solutions of T(f) = g are of the form

   f(t) = c1 f1(t) + c2 f2(t) + ··· + cn fn(t) + fp(t)

   where f1, ..., fn are the solutions from Step 1 and fp is the particular solution from Step 2.

9.12 Linearization of a Nonlinear System

If (a, b) is an equilibrium point of the system

dx/dt = f(x, y),   dy/dt = g(x, y)

such that f(a, b) = 0 and g(a, b) = 0, then the system is approximated near (a, b) by the Jacobian matrix:

[ du/dt ]   [ ∂f/∂x (a, b)   ∂f/∂y (a, b) ] [ u ]
[ dv/dt ] = [ ∂g/∂x (a, b)   ∂g/∂y (a, b) ] [ v ]

where u = x − a and v = y − b.

9.13 The Heat Equation

ft(x, t) = fxx(x, t)

has solutions of the form

f(x, t) = Σ (n = 1 to ∞) bn sin(nx) e^(−n^2 t)

where

bn = (2/π) ∫ (0 to π) f(x, 0) sin(nx) dx

9.14 The Wave Equation


ftt(x, t) = c^2 fxx(x, t)

has solutions of the form

f(x, t) = Σ (n = 1 to ∞) [ an sin(nx) cos(nct) + (bn/(nc)) sin(nx) sin(nct) ]

where

an = (2/π) ∫ (0 to π) f(x, 0) sin(nx) dx   and   bn = (2/π) ∫ (0 to π) ft(x, 0) sin(nx) dx
