
Time-Discrete Signals and Systems

Adaptation Course for Master Studies


Dr.-Ing. Volker Kühn
Institute for Telecommunications and High-Frequency Techniques
Department of Communications Engineering
Room: N2300, Phone: 0421/218-2407
kuehn@ant.uni-bremen.de
Lecture: Thursday, 08:30 – 10:00 in N1170
Dates for exercises will be announced during lectures.
Tutor: Ronald Böhnke, Room N2380, Phone 0421/218-2545, boehnke@ant.uni-bremen.de

www.ant.uni-bremen.de/teaching

Outline
- Summary of Laplace and Fourier Transformation
- Z-Transformation
  - Time-discrete signals
  - Properties of Z-transformation and convergence
  - System description by Z-transformation

- Stochastic Processes
  - Characterization of stochastic processes and random variables
  - Probabilities, densities, distributions, moments, stationary and ergodic processes
  - Central limit theorem
  - Correlation and spectral power density (real and complex signals)
  - System analysis for stochastic input signals (Wiener-Khintchine theorem)

- Multiple-Input Multiple-Output Systems
  - System description
  - Linear algebra (eigenvalues and eigenvectors, pseudo-inverse) -- brief
  - Decompositions (QR, unitary matrices, singular value, Cholesky)
  - Statistical representation (multivariate distributions)

Universität Bremen

Linear Algebra
- Notations and definitions
  - Vectors and matrices
  - Elementary operations, matrix multiplication
  - Determinants
- Special matrices
  - Symmetric, orthogonal, complex, circulant, Toeplitz, ...
- Linear equation systems
  - Gaussian elimination, Cramer's rule, iterative methods
- Matrix factorizations
  - LU, Cholesky, QR (Householder, Givens, Gram-Schmidt)
  - Eigenvalues and eigenvectors, SVD (condition, pseudo-inverse)
- Least squares, matrix inversion lemma


Notations and Definitions (1)

- Vectors
  - Column vectors (preferred): boldface lower case
    \[ \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \]
  - Row vectors: underlined boldface lower case
    \[ \underline{\mathbf{x}} = [\, x_1 \ x_2 \ \cdots \ x_n \,] \]
- Matrices
  - Boldface capital letters
    \[ \mathbf{A} = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & & & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix} = [\, \mathbf{a}_1 \ \mathbf{a}_2 \ \cdots \ \mathbf{a}_n \,] = \begin{bmatrix} \underline{\mathbf{a}}_1 \\ \underline{\mathbf{a}}_2 \\ \vdots \\ \underline{\mathbf{a}}_m \end{bmatrix} \]
  - Column vectors are just m × 1 matrices
  - Row vectors are just 1 × n matrices

Notations and Definitions (2)

- Some special matrices
  - Identity matrix and zero matrix
    \[ \mathbf{I} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix} \qquad \mathbf{0} = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{bmatrix} \]
  - Diagonal, lower and upper triangular matrices
    \[ \mathbf{D} = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{bmatrix} \quad \mathbf{L} = \begin{bmatrix} l_{1,1} & 0 & \cdots & 0 \\ l_{2,1} & l_{2,2} & & \vdots \\ \vdots & & \ddots & 0 \\ l_{n,1} & l_{n,2} & \cdots & l_{n,n} \end{bmatrix} \quad \mathbf{U} = \begin{bmatrix} u_{1,1} & u_{1,2} & \cdots & u_{1,n} \\ 0 & u_{2,2} & & u_{2,n} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & u_{n,n} \end{bmatrix} \]


Basic Operations and Properties

- Let A, B, C be m × n matrices and α, β be scalars
- Addition and scalar multiplication are defined element-wise
  \[ \mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{1,1} + b_{1,1} & \cdots & a_{1,n} + b_{1,n} \\ \vdots & & \vdots \\ a_{m,1} + b_{m,1} & \cdots & a_{m,n} + b_{m,n} \end{bmatrix} \qquad \alpha \mathbf{A} = \begin{bmatrix} \alpha a_{1,1} & \cdots & \alpha a_{1,n} \\ \vdots & & \vdots \\ \alpha a_{m,1} & \cdots & \alpha a_{m,n} \end{bmatrix} \]
- Properties
  - A + B = B + A                 (addition commutative)
  - (A + B) + C = A + (B + C)     (addition associative)
  - A + 0 = A                     (neutral element of addition)
  - A + (−A) = 0                  (inverse element of addition)
  - (αβ)A = α(βA)                 (scalar multiplication associative)
  - 1A = A                        (neutral element of scalar multiplication)
  - (α + β)A = αA + βA            (scalar multiplication distributive)
  - α(A + B) = αA + αB            (scalar multiplication distributive)

Matrix Multiplication (1)

- Let A be an m × n matrix with columns a_j and rows \underline{a}_i, and B an n × p matrix with columns b_j and rows \underline{b}_i
- The product C = AB is an m × p matrix with elements
  \[ c_{i,j} = \sum_{k=1}^{n} a_{i,k} \, b_{k,j} \]
  - "row times column"
- Note: the number of columns of A has to equal the number of rows of B
- Equivalent formulations of the matrix multiplication:
  \[ \mathbf{C} = \begin{bmatrix} \sum_k a_{1,k} b_{k,1} & \cdots & \sum_k a_{1,k} b_{k,p} \\ \vdots & & \vdots \\ \sum_k a_{m,k} b_{k,1} & \cdots & \sum_k a_{m,k} b_{k,p} \end{bmatrix} = \begin{bmatrix} \underline{\mathbf{a}}_1 \mathbf{b}_1 & \cdots & \underline{\mathbf{a}}_1 \mathbf{b}_p \\ \vdots & & \vdots \\ \underline{\mathbf{a}}_m \mathbf{b}_1 & \cdots & \underline{\mathbf{a}}_m \mathbf{b}_p \end{bmatrix} = [\, \mathbf{A}\mathbf{b}_1 \ \cdots \ \mathbf{A}\mathbf{b}_p \,] = \begin{bmatrix} \underline{\mathbf{a}}_1 \mathbf{B} \\ \vdots \\ \underline{\mathbf{a}}_m \mathbf{B} \end{bmatrix} = \sum_{k=1}^{n} \mathbf{a}_k \underline{\mathbf{b}}_k \]
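The element-wise definition and the "sum of outer products" view above can be sketched in a few lines of Python (lists of rows; `matmul` is a helper name chosen here, not part of the lecture material):

```python
def matmul(A, B):
    """'Row times column': c[i][j] = sum_k a[i][k] * b[k][j]."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]             # 3 x 2
C = matmul(A, B)         # 2 x 2
print(C)                 # [[4, 5], [10, 11]]

# Equivalent view: C = sum_k (k-th column of A) times (k-th row of B)
C2 = [[0] * 2 for _ in range(2)]
for k in range(3):
    for i in range(2):
        for j in range(2):
            C2[i][j] += A[i][k] * B[k][j]
assert C2 == C
```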


Matrix Multiplication (2)

- Special cases
  - m = 1, n > 1, p = 1 (row vector times column vector): inner or scalar product
    \[ c = \underline{\mathbf{a}} \, \mathbf{b} = \sum_{k=1}^{n} a_k b_k \quad \rightarrow \text{scalar} \]
  - m = 1, n > 1, p > 1 (row vector times matrix)
    \[ \underline{\mathbf{c}} = \underline{\mathbf{a}} \, \mathbf{B} = \sum_{k=1}^{n} a_k \underline{\mathbf{b}}_k \quad \rightarrow \text{row vector} \]
  - m > 1, n > 1, p = 1 (matrix times column vector)
    \[ \mathbf{c} = \mathbf{A} \mathbf{b} = \sum_{k=1}^{n} \mathbf{a}_k b_k \quad \rightarrow \text{column vector} \]
  - m > 1, n = 1, p > 1 (column vector times row vector): outer or dyadic product
    \[ \mathbf{C} = \mathbf{a} \, \underline{\mathbf{b}} = \begin{bmatrix} a_1 b_1 & \cdots & a_1 b_p \\ \vdots & & \vdots \\ a_m b_1 & \cdots & a_m b_p \end{bmatrix} \quad \rightarrow \text{matrix} \]
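A minimal sketch of the two extreme cases, inner and outer product (the helper names `inner` and `outer` are chosen here for illustration):

```python
def inner(a, b):
    """Row vector times column vector -> scalar."""
    return sum(x * y for x, y in zip(a, b))

def outer(a, b):
    """Column vector times row vector -> m x p matrix of all products."""
    return [[x * y for y in b] for x in a]

print(inner([1, 2, 3], [4, 5, 6]))   # 32
print(outer([1, 2], [3, 4]))         # [[3, 4], [6, 8]]
```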


Matrix Multiplication (3)

- Properties
  - (A + B)C = AC + BC            (matrix multiplication distributive)
  - A(B + C) = AB + AC            (matrix multiplication distributive)
  - α(AB) = (αA)B = A(αB)         (mixed scalar/matrix multiplication associative)
  - (AB)C = A(BC)                 (matrix multiplication associative)

- Note: matrix multiplication is not commutative in general
  - Example
    \[ \mathbf{A} = \begin{bmatrix} 2 & 6 \\ 1 & 7 \end{bmatrix} \qquad \mathbf{B} = \begin{bmatrix} -3 & -1 \\ 2 & 1 \end{bmatrix} \qquad \mathbf{C} = \begin{bmatrix} 15 & 6 \\ 1 & 20 \end{bmatrix} \]
    \[ \mathbf{AB} = \begin{bmatrix} 6 & 4 \\ 11 & 6 \end{bmatrix} \quad \mathbf{BA} = \begin{bmatrix} -7 & -25 \\ 5 & 19 \end{bmatrix} \;\Rightarrow\; \mathbf{AB} \neq \mathbf{BA} \]
    \[ \mathbf{AC} = \begin{bmatrix} 36 & 132 \\ 22 & 146 \end{bmatrix} \quad \mathbf{CA} = \begin{bmatrix} 36 & 132 \\ 22 & 146 \end{bmatrix} \;\Rightarrow\; \mathbf{AC} = \mathbf{CA} \]

Transpose and Hermitian Transpose

- Transpose of a matrix
  \[ \mathbf{A} = [\, \mathbf{a}_1 \ \cdots \ \mathbf{a}_n \,] = \begin{bmatrix} \underline{\mathbf{a}}_1 \\ \vdots \\ \underline{\mathbf{a}}_m \end{bmatrix} \;\Rightarrow\; \mathbf{A}^T = \begin{bmatrix} a_{1,1} & \cdots & a_{m,1} \\ \vdots & & \vdots \\ a_{1,n} & \cdots & a_{m,n} \end{bmatrix} = \begin{bmatrix} \mathbf{a}_1^T \\ \vdots \\ \mathbf{a}_n^T \end{bmatrix} = [\, \underline{\mathbf{a}}_1^T \ \cdots \ \underline{\mathbf{a}}_m^T \,] \]
  - Row vectors become column vectors and vice versa
- Hermitian transpose of a complex matrix
  \[ \mathbf{A}^H = (\mathbf{A}^*)^T = \begin{bmatrix} a_{1,1}^* & \cdots & a_{m,1}^* \\ \vdots & & \vdots \\ a_{1,n}^* & \cdots & a_{m,n}^* \end{bmatrix} \]
  - Transpose of the complex conjugate matrix
- Properties
  - (A^T)^T = A and (A^H)^H = A
  - (A + B)^T = A^T + B^T and (A + B)^H = A^H + B^H
  - (AB)^T = B^T A^T and (AB)^H = B^H A^H
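A small sketch of both operations (helper names `transpose` and `hermitian` are chosen here), verifying the product rule (AB)^H = B^H A^H on an exact complex-integer example:

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def hermitian(A):
    # Transpose of the complex conjugate matrix
    return [[x.conjugate() for x in row] for row in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1 + 2j, 3], [0, 1 - 1j]]
B = [[2, 1j], [1, 1]]

lhs = hermitian(matmul(A, B))
rhs = matmul(hermitian(B), hermitian(A))
assert lhs == rhs                    # (AB)^H = B^H A^H
```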


Determinants (1)
- Determinant of a 2 × 2 matrix
  \[ \det \mathbf{A} = \begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix} = a_{1,1} a_{2,2} - a_{2,1} a_{1,2} \]
- Determinant of a 3 × 3 matrix (Sarrus' rule)
  \[ \det \mathbf{A} = a_{1,1} a_{2,2} a_{3,3} + a_{1,2} a_{2,3} a_{3,1} + a_{1,3} a_{2,1} a_{3,2} - a_{3,1} a_{2,2} a_{1,3} - a_{3,2} a_{2,3} a_{1,1} - a_{3,3} a_{2,1} a_{1,2} \]
- Determinant of an n × n matrix
  - Let the (n−1) × (n−1) matrix A_{i,j} equal A without the i-th row and j-th column
  - Recursive definition of the determinant by cofactor expansion
    \[ \det \mathbf{A} = \underbrace{\sum_{i=1}^{n} (-1)^{i+j} a_{i,j} \det \mathbf{A}_{i,j}}_{\text{column expansion}} = \underbrace{\sum_{j=1}^{n} (-1)^{i+j} a_{i,j} \det \mathbf{A}_{i,j}}_{\text{row expansion}} \]
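The recursive cofactor expansion (here along the first column) translates directly into code; this is a sketch for small n only, since the cost grows like n!:

```python
def det(A):
    """Determinant by cofactor expansion along column 0."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        # Minor: delete row i and column 0
        minor = [row[1:] for row in (A[:i] + A[i+1:])]
        total += (-1) ** i * A[i][0] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 1], [1, 3, 2], [0, 1, 1]]) == 3   # matches Sarrus' rule
```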


Determinants (2)
- Fundamental properties
  - Linearity in columns (rows): det[α a1 + α' a1'  a2] = α · det[a1 a2] + α' · det[a1' a2]
  - Exchanging two columns (rows): det[a2 a1] = − det[a1 a2]
  - Determinant of the identity matrix: det I = 1
- Some additional properties
  - Symmetry in columns and rows: det A^T = det A
  - Zero column (row): det[0 a2] = 0
  - Two equal columns (rows): det[a1 a1] = 0
  - Multiple of one column (row): det[α a1  a2] = α · det[a1 a2]
  - Scalar multiplication: det(αA) = α^n det A
  - Adding a multiple of one column (row) to another: det[a1 + α a2  a2] = det A
  - Determinant of a matrix product: det(AB) = det A · det B

- All properties are valid for arbitrary n × n matrices


Determinants (3)
- Full expansion of the determinant by recursive application of the definition
  \[ \det \mathbf{A} = \sum_{\mathbf{p}} (-1)^{\mathrm{per}\{\mathbf{p}\}} \prod_{i=1}^{n} a_{i,p_i} = \sum_{\mathbf{p}} (-1)^{\mathrm{per}\{\mathbf{p}\}} \prod_{i=1}^{n} a_{p_i,i} \]
  - Sum over all n! vectors p containing permutations of 1, 2, ..., n
  - per{p} is the number of pairwise exchange operations needed for the permutation

- Determinant of a diagonal or triangular matrix
  - At least one factor is zero for all p ≠ [1 2 ... n]^T
  \[ \det \mathbf{D} = \prod_{i=1}^{n} d_i \qquad \det \mathbf{L} = \prod_{i=1}^{n} l_{i,i} \qquad \det \mathbf{U} = \prod_{i=1}^{n} u_{i,i} \]
- Efficient calculation of the determinant
  - The determinant is unaffected by adding multiples of rows (columns) to other rows (columns)
  - Transform A into a triangular matrix by elementary row (column) operations
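The efficient O(n³) route described above can be sketched as follows: reduce A to upper triangular form by row operations (each row exchange flips the sign) and multiply the diagonal elements. This is an illustrative sketch, not the lecture's own code:

```python
def det(A):
    A = [row[:] for row in A]       # work on a copy, in floating point
    n = len(A)
    sign = 1.0
    for j in range(n):
        # Partial pivoting: pick the largest element in column j
        p = max(range(j, n), key=lambda i: abs(A[i][j]))
        if A[p][j] == 0:
            return 0.0              # singular matrix
        if p != j:
            A[j], A[p] = A[p], A[j]
            sign = -sign            # row exchange flips the sign
        for i in range(j + 1, n):
            l = A[i][j] / A[j][j]
            for k in range(j, n):
                A[i][k] -= l * A[j][k]
    prod = sign
    for j in range(n):
        prod *= A[j][j]             # product of the pivots
    return prod

print(round(det([[2, 0, 1], [1, 3, 2], [0, 1, 1]]), 10))  # 3.0
```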


Linear Equation Systems (1)

- System of m linear equations in n unknowns
  \[ \begin{aligned} a_{1,1} x_1 + a_{1,2} x_2 + \cdots + a_{1,n} x_n &= b_1 \\ a_{2,1} x_1 + a_{2,2} x_2 + \cdots + a_{2,n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m,1} x_1 + a_{m,2} x_2 + \cdots + a_{m,n} x_n &= b_m \end{aligned} \]
- Matrix-vector notation: Ax = b
- Extended coefficient matrix
  \[ [\, \mathbf{A} \mid \mathbf{b} \,] = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} & b_1 \\ \vdots & & & & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} & b_m \end{bmatrix} \]
- Geometric interpretations
  - x is the intersection of m hyperplanes \underline{a}_i x = b_i
  - b is a linear combination of the column vectors: \sum_{i=1}^{n} x_i a_i = b


Linear Equation Systems (2)

- Illustration for a 2 × 2 system (hyperplanes → straight lines)
  [Figure: the two straight lines a_1 x = b_1 and a_2 x = b_2 in the (x_1, x_2) plane, and below them the corresponding column-vector pictures of b as a combination of a_1 and a_2]
  - Intersecting straight lines: a_1, a_2 linearly independent → unique solution
  - Parallel straight lines: a_1, a_2 parallel → no solution
  - Identical straight lines: a_1, a_2, b parallel → infinite number of solutions


Linear Equation Systems (3)

- Square linear equation system Ax = b with n equations in n unknowns
- Cramer's rule
  - Let A_j equal A with the j-th column replaced by b
    A_j = [ a_1 ... a_{j−1}  b  a_{j+1} ... a_n ]
  - Then the j-th element of x is
    \[ x_j = \frac{\det \mathbf{A}_j}{\det \mathbf{A}} \]
- Proof: substitute b = \sum_{i=1}^{n} x_i a_i into A_j and use linearity in columns
- Three possibilities
  - det A ≠ 0 → unique solution
  - det A = 0 and det A_j ≠ 0 for some j → no solution
  - det A = 0 and det A_j = 0 for all j → infinite number of solutions
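Cramer's rule can be sketched directly from the formula above (determinants via cofactor expansion, so this is only sensible for tiny systems; Gaussian elimination is the practical method):

```python
def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** i * A[i][0] * det([r[1:] for r in (A[:i] + A[i+1:])])
               for i in range(len(A)))

def cramer(A, b):
    d = det(A)
    assert d != 0, "no unique solution"
    n = len(A)
    x = []
    for j in range(n):
        # A_j: matrix A with the j-th column replaced by b
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x

# 2 x1 + x2 = 5 and x1 + 3 x2 = 10  ->  x1 = 1, x2 = 3
print(cramer([[2, 1], [1, 3]], [5, 10]))   # [1.0, 3.0]
```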


Gaussian Elimination (1)

- Example: 3 × 3 system
- (1) Elimination
  - Subtract multiples of rows to create zeros; transform the system into upper triangular form
  - Subtract l_{2,1} = a_{2,1}/a_{1,1} times row 1 from row 2 and l_{3,1} = a_{3,1}/a_{1,1} times row 1 from row 3, then l_{3,2} = a^{(1)}_{3,2}/a^{(1)}_{2,2} times row 2 from row 3:
    \[ \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} & b_1 \\ a_{2,1} & a_{2,2} & a_{2,3} & b_2 \\ a_{3,1} & a_{3,2} & a_{3,3} & b_3 \end{bmatrix} \rightarrow \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} & b_1 \\ 0 & a^{(1)}_{2,2} & a^{(1)}_{2,3} & b^{(1)}_2 \\ 0 & a^{(1)}_{3,2} & a^{(1)}_{3,3} & b^{(1)}_3 \end{bmatrix} \rightarrow \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} & b_1 \\ 0 & a^{(1)}_{2,2} & a^{(1)}_{2,3} & b^{(1)}_2 \\ 0 & 0 & a^{(2)}_{3,3} & b^{(2)}_3 \end{bmatrix} \]
  - Pivot elements: a_{1,1}, a^{(1)}_{2,2}, a^{(2)}_{3,3}; the lower right blocks are the reduced systems
- (2) Back-substitution
  - Solve for the unknowns, computation in reverse order:
    x_3 = b^{(2)}_3 / a^{(2)}_{3,3}
    x_2 = ( b^{(1)}_2 − a^{(1)}_{2,3} x_3 ) / a^{(1)}_{2,2}
    x_1 = ( b_1 − a_{1,2} x_2 − a_{1,3} x_3 ) / a_{1,1}
- Extension to (1): if the pivot a^{(j−1)}_{j,j} = 0
  - If a^{(j−1)}_{k,j} ≠ 0 for some k > j → exchange rows
  - If a^{(j−1)}_{k,j} = 0 for all k > j → move to the next column


Gaussian Elimination (2)

- Special cases (• = pivot, ∗ = arbitrary entry)
  - All diagonal elements nonzero → unique solution
    • ∗ ∗ ∗
    0 • ∗ ∗
    0 0 • ∗
  - Zero row in the coefficient matrix, corresponding right hand side nonzero → no solution
    ∗ ∗ ∗ ∗
    0 ∗ ∗ ∗
    0 0 0 •
  - Zero rows in the coefficient matrix, corresponding right hand sides zero → infinite number of solutions
    • ∗ ∗ ∗    • ∗ ∗ ∗    • ∗ ∗ ∗    0 0 0 0
    0 • ∗ ∗    0 0 • ∗    0 0 0 0    0 0 0 0
    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
    Free parameters: x_3;  x_2;  x_2, x_3;  x_1, x_2, x_3

Gaussian Elimination (3)

- General formulation of the algorithm
- (1) Initialization and elimination
    A^(0) := A, b^(0) := b
    for j := 1 to m−1 do
        find pivot element a^(j−1)_{j,n_j}:
            n_j := index of the first nonzero column
            if no such n_j exists then r := j−1, break
            exchange rows so that a^(j−1)_{j,n_j} ≠ 0
        for i := j+1 to m do
            l_{i,j} = a^(j−1)_{i,n_j} / a^(j−1)_{j,n_j}
            for k := n_j+1 to n do
                a^(j)_{i,k} = a^(j−1)_{i,k} − l_{i,j} · a^(j−1)_{j,k}
            end
            b^(j)_i = b^(j−1)_i − l_{i,j} · b^(j−1)_j
        end
    end
- (2) Consistency check
    if b^(r)_k ≠ 0 for some k > r then stop (no solution)
- (3) Back-substitution
    choose values for the free parameters
    for j := r downto 1 do
        x_{n_j} = ( b^(j−1)_j − \sum_{k=n_j+1}^{n} a^(j−1)_{j,k} · x_k ) / a^(j−1)_{j,n_j}
    end
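For the common square, nonsingular case the algorithm above collapses to a compact routine; this sketch omits the general rank-deficient bookkeeping (steps for free parameters and the consistency check) and the helper name `solve` is chosen here:

```python
def solve(A, b):
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for j in range(n):
        # Pivot search with row exchange (partial pivoting)
        p = max(range(j, n), key=lambda i: abs(A[i][j]))
        A[j], A[p] = A[p], A[j]
        for i in range(j + 1, n):
            l = A[i][j] / A[j][j]
            for k in range(j, n + 1):
                A[i][k] -= l * A[j][k]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):                 # back-substitution
        x[j] = (A[j][n] - sum(A[j][k] * x[k] for k in range(j + 1, n))) / A[j][j]
    return x

x = solve([[2, 1, 1], [4, 3, 3], [8, 7, 9]], [4, 10, 24])
print(x)   # close to [1.0, 1.0, 1.0]
```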


Gaussian Elimination (4)

- Result after the elimination step: the first r rows contain the pivots a^(0)_{1,n_1}, a^(1)_{2,n_2}, ..., a^(r−1)_{r,n_r} in echelon form with right hand sides b^(0)_1, ..., b^(r−1)_r; the remaining m − r rows of the coefficient matrix are zero with right hand sides b^(r)_{r+1}, ..., b^(r)_m
- Number of nonzero rows on the left hand side = rank of the matrix A: rank{A} = r

- A solution exists only if r = m, or if r < m and b^(r)_{r+1} = ... = b^(r)_m = 0
  - Unique solution if r = n → no free parameters
  - Infinite number of solutions if r < n → n − r free parameters


Iterative Solution of Linear Equation Systems

- Linear equation system Ax = b ⇔ \sum_{j=1}^{n} a_{i,j} x_j = b_i for 1 ≤ i ≤ n
- Basic idea of iterative algorithms
  - Start with an initial estimate of the solution vector x^(0)
  - Find an improved approximation x^(k+1) from the previous approximation x^(k)
  - Stop after convergence
- Jacobi
  - Solve row i for the unknown x_i
  - Parallel implementation possible
    \[ x_i^{(k+1)} = \Big( b_i - \sum_{j=1}^{i-1} a_{i,j} x_j^{(k)} - \sum_{j=i+1}^{n} a_{i,j} x_j^{(k)} \Big) \cdot \frac{1}{a_{i,i}} \]
- Gauss-Seidel
  - Use already updated values
  - Better convergence behavior than Jacobi
  - No parallel implementation possible
    \[ x_i^{(k+1)} = \Big( b_i - \sum_{j=1}^{i-1} a_{i,j} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{i,j} x_j^{(k)} \Big) \cdot \frac{1}{a_{i,i}} \]
- Conjugate Gradient
  - More complicated implementation, but usually fast convergence
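The Jacobi iteration above can be sketched in a few lines; the example matrix is strictly diagonally dominant, a standard sufficient condition for convergence (the fixed iteration count stands in for a proper convergence test):

```python
def jacobi(A, b, iters=50):
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        # Every new x_i uses only the previous iterate -> parallelizable
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4, 1, 0], [1, 4, 1], [0, 1, 4]]
b = [5, 6, 5]
x = jacobi(A, b)
print([round(v, 6) for v in x])   # [1.0, 1.0, 1.0]
```

Replacing `x[j]` for j < i by the already updated values turns this into Gauss-Seidel.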


Inverse Matrix
- Inverse A^{-1} of a square n × n matrix A: A^{-1} A = A A^{-1} = I
- Relation of the inverse to linear equation systems: Ax = b ⇔ x = A^{-1} b
- Calculation of the inverse by the Gauss-Jordan method
  - n simultaneous linear equation systems [Ax_1 ... Ax_n] = AX = I ⇔ [A | I]
  - Forward elimination:  [A | I] ⇒ [U | L^{-1}]
  - Backward elimination: [U | L^{-1}] ⇒ [I | A^{-1}]
- The inverse exists only if AX = I has a unique solution (→ A nonsingular)
  - Condition: rank{A} = n ⇔ det A ≠ 0
- Properties
  - (A^{-1})^{-1} = A
  - (AB)^{-1} = B^{-1} A^{-1}
  - (A^H)^{-1} = (A^{-1})^H
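The Gauss-Jordan method can be sketched as elimination on the augmented matrix [A | I] until the left half becomes I, at which point the right half is A^{-1} (this sketch assumes A is nonsingular; `inverse` is a name chosen here):

```python
def inverse(A):
    n = len(A)
    # Augment A with the identity matrix: [A | I]
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))   # pivot search
        M[j], M[p] = M[p], M[j]
        piv = M[j][j]
        M[j] = [v / piv for v in M[j]]      # normalize pivot row
        for i in range(n):
            if i != j:                      # clear column j above AND below
                l = M[i][j]
                M[i] = [vi - l * vj for vi, vj in zip(M[i], M[j])]
    return [row[n:] for row in M]           # right half is A^{-1}

Ainv = inverse([[4.0, 7.0], [2.0, 6.0]])
print([[round(v, 10) for v in row] for row in Ainv])  # [[0.6, -0.7], [-0.2, 0.4]]
```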


LU Decomposition
- Every invertible matrix A can be written (possibly after row exchanges) as the product of a lower triangular matrix L and an upper triangular matrix U
  A = LU
- Application: solution of linear equation systems Ax = LUx = b with constant coefficient matrix for different right hand sides
  - Inversion of triangular matrices is easy → solve Ly = b and then Ux = y
- Calculation of the LU decomposition by Gaussian elimination
  - Forward elimination: [A = LU | I] ⇒ [U | L^{-1}]
  - L contains the factors l_{i,j} = a^{(j−1)}_{i,j} / a^{(j−1)}_{j,j} from the elimination steps
- Direct calculation of the LU decomposition (example: 3 × 3 matrix)
  \[ \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ l_{2,1} & 1 & 0 \\ l_{3,1} & l_{3,2} & 1 \end{bmatrix} \cdot \begin{bmatrix} r_{1,1} & r_{1,2} & r_{1,3} \\ 0 & r_{2,2} & r_{2,3} \\ 0 & 0 & r_{3,3} \end{bmatrix} = \begin{bmatrix} r_{1,1} & r_{1,2} & r_{1,3} \\ l_{2,1} r_{1,1} & l_{2,1} r_{1,2} + r_{2,2} & l_{2,1} r_{1,3} + r_{2,3} \\ l_{3,1} r_{1,1} & l_{3,1} r_{1,2} + l_{3,2} r_{2,2} & l_{3,1} r_{1,3} + l_{3,2} r_{2,3} + r_{3,3} \end{bmatrix} \]
  - Calculation order: r_{1,1} → r_{1,2} → r_{1,3} → l_{2,1} → l_{3,1} → r_{2,2} → r_{2,3} → l_{3,2} → r_{3,3}
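The direct calculation generalizes to the Doolittle scheme sketched below (no pivoting, so it assumes all leading pivots are nonzero; `lu` is a name chosen here):

```python
def lu(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # unit diagonal
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu(A)
# Reassemble L*U and compare with A
LU = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(L, U, LU)
```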


Cholesky Decomposition
- Let A be
  - hermitian: A^H = A
  - positive definite: x^H A x > 0 for all x ≠ 0
- Then A is fully characterized by a lower triangular matrix L
  - Cholesky decomposition: A = L L^H
- Similar to the LU decomposition
  - But the computational complexity is reduced by a factor of 2
- Algorithm
    A^(0) := A
    for k := 1 to n do
        l_{k,k} = sqrt( a^(k−1)_{k,k} )
        for i := k+1 to n do
            l_{i,k} = a^(k−1)_{i,k} / l^*_{k,k}
            for j := k+1 to i do
                a^(k)_{i,j} = a^(k−1)_{i,j} − l_{i,k} · l^*_{j,k}
            end
        end
    end
- Example: 3 × 3 matrix
  \[ \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{bmatrix} = \begin{bmatrix} |l_{1,1}|^2 & l_{1,1} l_{2,1}^* & l_{1,1} l_{3,1}^* \\ l_{2,1} l_{1,1}^* & |l_{2,1}|^2 + |l_{2,2}|^2 & l_{2,1} l_{3,1}^* + l_{2,2} l_{3,2}^* \\ l_{3,1} l_{1,1}^* & l_{3,1} l_{2,1}^* + l_{3,2} l_{2,2}^* & |l_{3,1}|^2 + |l_{3,2}|^2 + |l_{3,3}|^2 \end{bmatrix} \]
  - Calculation order: l_{1,1} → l_{2,1} → l_{3,1} → l_{2,2} → l_{3,2} → l_{3,3}
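A sketch for the real symmetric positive definite case (the complex hermitian case adds the conjugates shown in the algorithm above; `cholesky` is a name chosen here):

```python
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal: square root
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below diagonal
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(A)
print(L)   # [[2.0, 0.0], [1.0, 2.0]]
```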


QR Decomposition (1)
- Every m × n matrix A can be written as A = QR where
  - Q is an m × n matrix with orthonormal columns
    \[ \mathbf{q}_i^H \mathbf{q}_j = \begin{cases} 1 & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases} \quad \Leftrightarrow \quad \mathbf{Q}^H \mathbf{Q} = \mathbf{I} \]
  - R is an upper triangular n × n matrix
- The columns of A are represented in the orthonormal base defined by Q
  \[ \mathbf{a}_k = \sum_{i=1}^{k} r_{i,k} \, \mathbf{q}_i \]
- Illustration for the m × 2 case
  [Figure: a_1 = r_{1,1} q_1 along q_1; a_2 = r_{1,2} q_1 + r_{2,2} q_2 decomposed along q_1 and q_2]

QR Decomposition (2)
- Calculation of the QR decomposition by the modified Gram-Schmidt algorithm
  - Calculate the length (Euclidean norm) of a_1 → r_{1,1}
  - Normalize a_1 to have unit length → q_1
  - Project a_2, ..., a_n onto q_1 → r_{1,j}
  - Subtract the components of a_2, ..., a_n parallel to q_1 → q^(1)_j
  - Continue with the next column
- Q is computed from left to right, R from top to bottom
- Algorithm
    Q^(0) := A
    for k := 1 to n do
        r_{k,k} = || q^(k−1)_k ||
        q_k = q^(k−1)_k / r_{k,k}
        for i := k+1 to n do
            r_{k,i} = q_k^H q^(k−1)_i
            q^(k)_i = q^(k−1)_i − r_{k,i} q_k
        end
    end
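The modified Gram-Schmidt algorithm above, sketched for real matrices stored as lists of rows (`mgs_qr` is a name chosen here):

```python
import math

def mgs_qr(A):
    m, n = len(A), len(A[0])
    Q = [[A[i][j] for j in range(n)] for i in range(m)]   # working columns
    R = [[0.0] * n for _ in range(n)]
    for k in range(n):
        R[k][k] = math.sqrt(sum(Q[i][k] ** 2 for i in range(m)))  # norm
        for i in range(m):
            Q[i][k] /= R[k][k]                            # normalize column k
        for j in range(k + 1, n):
            R[k][j] = sum(Q[i][k] * Q[i][j] for i in range(m))  # projection
            for i in range(m):
                Q[i][j] -= R[k][j] * Q[i][k]              # subtract component
    return Q, R

A = [[1.0, 1.0], [0.0, 1.0], [0.0, 1.0]]
Q, R = mgs_qr(A)
# Q has orthonormal columns and Q R reproduces A
QR = [[sum(Q[i][k] * R[k][j] for k in range(2)) for j in range(2)] for i in range(3)]
print(QR)
```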

QR Decomposition (3)
- Householder reflections: reflect x onto y by multiplication with a unitary matrix
  \[ \mathbf{H} = \mathbf{I} - (1 + w) \cdot \frac{\mathbf{u}\mathbf{u}^H}{\mathbf{u}^H \mathbf{u}} \quad \text{with} \quad \mathbf{u} = \mathbf{x} - \mathbf{y} \quad \text{and} \quad w = \frac{\mathbf{x}^H \mathbf{u}}{\mathbf{u}^H \mathbf{x}} \]
  - Special case y = [ ||x||  0 ... 0 ]^T → create zeros in a vector: y = Hx
- Application to the QR decomposition of an m × n matrix A
    R := A, Q := I_m                      (initialization)
    for k := 1 to n do                    (loop through all columns)
        x = R(k:m, k)
        y = [ ||x||  0 ... 0 ]^T          (create zeros below the main
        calculate u, w, H                  diagonal in the k-th column of R)
        R(k:m, k:n) = H · R(k:m, k:n)
        Q(:, k:m) = Q(:, k:m) · H^H       (update unitary matrix Q)
    end
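A sketch of the real-valued special case: for real vectors w = 1 and the reflection reduces to H = I − 2 u u^T / (u^T u), which maps x onto [ ||x||, 0, ..., 0 ]^T (the helper name `householder_zero` is chosen here; for robustness one would pick the sign of y to avoid cancellation, which this sketch omits):

```python
import math

def householder_zero(x):
    m = len(x)
    norm = math.sqrt(sum(v * v for v in x))
    u = x[:]
    u[0] -= norm                      # u = x - y with y = [||x||, 0, ..., 0]
    uu = sum(v * v for v in u)        # u^T u
    # H = I - 2 u u^T / (u^T u)  (real case, w = 1)
    return [[(1.0 if i == j else 0.0) - 2.0 * u[i] * u[j] / uu
             for j in range(m)] for i in range(m)]

x = [3.0, 4.0]
H = householder_zero(x)
y = [sum(H[i][j] * x[j] for j in range(2)) for i in range(2)]
print(y)   # close to [5.0, 0.0], i.e. [||x||, 0]
```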


QR Decomposition (4)
- Givens rotations
  - Let G(i, k, θ) equal an identity matrix except for
    g^*_{i,i} = g_{k,k} = cos θ = c    and    −g^*_{i,k} = g_{k,i} = sin θ = s
  - G(i, k, θ) is unitary and describes a rotation
- Special choices for c and s:
    c = x_i / sqrt(|x_i|² + |x_k|²)    and    s = −x_k / sqrt(|x_i|² + |x_k|²)
- Linear transformation
    y = G(i, k, θ) · x  ⇒  y_i = sqrt(|x_i|² + |x_k|²),  y_k = 0,  y_j = x_j for all j ≠ i, k
  - A Givens rotation can create a zero while changing only one other element
- Example
    [∗ ∗ ∗]  G(2,3,θ_1)  [∗ ∗ ∗]  G(1,2,θ_2)  [∗ ∗ ∗]  G(2,3,θ_3)  [∗ ∗ ∗]
    [∗ ∗ ∗]  ─────────→  [∗ ∗ ∗]  ─────────→  [0 ∗ ∗]  ─────────→  [0 ∗ ∗] = R
    [∗ ∗ ∗]              [0 ∗ ∗]              [0 ∗ ∗]              [0 0 ∗]
  ⇒ R = G(2,3,θ_3) G(1,2,θ_2) G(2,3,θ_1) A    and    Q = G(2,3,θ_1)^H G(1,2,θ_2)^H G(2,3,θ_3)^H
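A single real Givens rotation, following the sign conventions above, can be sketched without forming the full matrix G (`givens_apply` is a name chosen here):

```python
import math

def givens_apply(x, i, k):
    """Rotate components i and k of x so that component k becomes zero."""
    r = math.hypot(x[i], x[k])
    c, s = x[i] / r, -x[k] / r
    y = x[:]
    y[i] = c * x[i] - s * x[k]       # = sqrt(x_i^2 + x_k^2)
    y[k] = s * x[i] + c * x[k]       # = 0
    return y                          # all other components unchanged

y = givens_apply([3.0, 5.0, 4.0], 0, 2)
print(y)   # component 0 becomes 5, component 2 becomes 0, component 1 untouched
```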


Eigenvalues and Eigenvectors (1)

- Special eigenvalue problem for arbitrary n × n matrices
    Ax = λx  ⇔  (A − λI)x = 0
- Condition for the existence of nontrivial solutions x ≠ 0
  - The characteristic polynomial of degree n has to be zero
    \[ p_{\mathbf{A}}(\lambda) = \det(\mathbf{A} - \lambda \mathbf{I}) = (\lambda_1 - \lambda)^{k_1} \cdots (\lambda_l - \lambda)^{k_l} = 0 \]
  - The zeros λ_i of the polynomial are the eigenvalues of A with algebraic multiplicity k_i
- Eigenvectors
  - Solve the linear equation systems (A − λ_i I) x_i = 0 for all eigenvalues
  - The dimension of the solution space is called the geometric multiplicity g_i (1 ≤ g_i ≤ k_i)
  - Eigenvectors belonging to different eigenvalues are linearly independent
- Diagonalization of a matrix A
  - Define the matrix X = [x_1 ... x_n] and the diagonal matrix Λ = diag(λ_1, ..., λ_n)
    AX = XΛ  ⇒  X^{-1} A X = Λ
  - Only possible for n linearly independent eigenvectors
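For a 2 × 2 matrix the characteristic polynomial is λ² − trace(A)·λ + det(A) = 0, which the quadratic formula solves directly; the sketch below also checks A v = λ v for one eigenvector:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]
tr = A[0][0] + A[1][1]                         # trace = sum of eigenvalues
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # det = product of eigenvalues
disc = math.sqrt(tr * tr - 4 * det)
lams = [(tr + disc) / 2, (tr - disc) / 2]
print(lams)                                    # [3.0, 1.0]

# Eigenvector for lambda = 3 from (A - 3I) v = 0  ->  v = [1, 1]
v = [1.0, 1.0]
Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
assert Av == [3.0 * v[0], 3.0 * v[1]]          # A v = lambda v
```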


Eigenvalues and Eigenvectors (2)

- Some useful general properties
  - A^T has eigenvalues λ_i
  - A^H has eigenvalues λ_i^*
  - αA has eigenvalues αλ_i with eigenvectors x_i
  - A^m has eigenvalues λ_i^m with eigenvectors x_i
  - A + βI has eigenvalues λ_i + β with eigenvectors x_i
  - X^{-1}AX has eigenvalues λ_i with eigenvectors X^{-1}x_i
  - det A = \prod_{i=1}^{n} λ_i
  - trace A = \sum_{i=1}^{n} λ_i
  - A invertible ⇔ all λ_i ≠ 0
  - A positive definite ⇔ all λ_i > 0
- Properties for hermitian matrices
  - All eigenvalues are real
  - Eigenvectors belonging to different eigenvalues are orthogonal
  - Algebraic and geometric multiplicities are identical
  - Consequence: all eigenvectors can be chosen to be mutually orthogonal
  - A hermitian matrix A can be diagonalized by a unitary matrix V
    V^H A V = Λ  ⇔  A = V Λ V^H    (eigenvalue decomposition)


Singular Value Decomposition (SVD) (1)

- Every m × n matrix A of rank r can be written as
  \[ \mathbf{A} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^H = \mathbf{U} \begin{bmatrix} \mathbf{\Sigma}_0 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^H \quad \text{with the matrix of singular values } \mathbf{\Sigma}_0 = \mathrm{diag}(\sigma_1, \ldots, \sigma_r) \]
  - Singular values σ_i of A = square roots of the nonzero eigenvalues of A^H A or AA^H
  - The unitary m × m matrix U contains the left singular vectors of A = eigenvectors of AA^H
  - The unitary n × n matrix V contains the right singular vectors of A = eigenvectors of A^H A
- Verification with the eigenvalue decomposition
  \[ \mathbf{A}^H \mathbf{A} = \mathbf{V} \mathbf{\Sigma}^H \mathbf{U}^H \mathbf{U} \mathbf{\Sigma} \mathbf{V}^H = \mathbf{V} \begin{bmatrix} \mathbf{\Sigma}_0^2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^H \qquad \mathbf{A} \mathbf{A}^H = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^H \mathbf{V} \mathbf{\Sigma}^H \mathbf{U}^H = \mathbf{U} \begin{bmatrix} \mathbf{\Sigma}_0^2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{U}^H \]
- Four fundamental subspaces: the vectors
  - u_1, ..., u_r span the column space of A          (orthogonal to each other:)
  - u_{r+1}, ..., u_m span the left nullspace of A
  - v_1, ..., v_r span the row space of A             (orthogonal to each other:)
  - v_{r+1}, ..., v_n span the right nullspace of A


Singular Value Decomposition (SVD) (2)

- Illustration of the fundamental subspaces
  - Consider the linear mapping x → Ax with the orthogonal decomposition x = x_r + x_n
  [Figure: x_r lies in the row space and x_n in the right nullspace of A; Ax = Ax_r lies in the column space, while Ax_n = 0]


Pseudo Inverse and Least Squares Solution (1)

- The inverse A^{-1} exists only for square matrices with full rank
- Generalization: (Moore-Penrose) pseudo inverse A^+
  \[ \mathbf{A} = \mathbf{U} \begin{bmatrix} \mathbf{\Sigma}_0 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^H \;\Rightarrow\; \mathbf{A}^+ = \mathbf{V} \mathbf{\Sigma}^+ \mathbf{U}^H = \mathbf{V} \begin{bmatrix} \mathbf{\Sigma}_0^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{U}^H \]
- Special cases for full rank matrices
  \[ \mathbf{A}^+ = \begin{cases} \mathbf{A}^H (\mathbf{A}\mathbf{A}^H)^{-1} & \text{for } \mathrm{rank}\{\mathbf{A}\} = m \\ (\mathbf{A}^H \mathbf{A})^{-1} \mathbf{A}^H & \text{for } \mathrm{rank}\{\mathbf{A}\} = n \end{cases} \]
- Application: least squares solution of a linear equation system
  - Problem: find the vector x that minimizes the Euclidean distance between Ax and b
  - Solution: project b onto the column space of A and solve Ax = b_c
  - If no unique solution exists → take the solution vector with the shortest length
    \[ \min_{\mathbf{x}} \| \mathbf{A}\mathbf{x} - \mathbf{b} \| \;\Rightarrow\; \mathbf{x} = \mathbf{A}^+ \mathbf{b} \]
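For the full-column-rank case, x = (A^H A)^{-1} A^H b can be obtained by solving the normal equations (A^T A) x = A^T b. The sketch below uses that route with a small Gaussian elimination instead of the SVD (note that forming A^T A squares the condition number, so the SVD is preferred for ill-conditioned problems; `lstsq` is a name chosen here):

```python
def lstsq(A, b):
    m, n = len(A), len(A[0])
    # Normal equations: (A^T A) x = A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with back-substitution on the small n x n system
    M = [row[:] + [Atb[i]] for i, row in enumerate(AtA)]
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        M[j], M[p] = M[p], M[j]
        for i in range(j + 1, n):
            l = M[i][j] / M[j][j]
            for k in range(j, n + 1):
                M[i][k] -= l * M[j][k]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (M[j][n] - sum(M[j][k] * x[k] for k in range(j + 1, n))) / M[j][j]
    return x

# Fit a line y = c0 + c1 t through three points (overdetermined system)
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
b = [0.0, 1.0, 2.0]
print(lstsq(A, b))   # [0.0, 1.0], i.e. the exact line y = t
```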


Pseudo Inverse and Least Squares Solution (2)

- Illustration of the least squares solution of a linear equation system
  [Figure: b is decomposed as b = b_c + b_n, where b_c = Ax lies in the column space of A and b_n in the left nullspace; the least squares solution x = A^+ b lies in the row space, and A^+ b_n = 0]

