
Basis for the row space: use Gauss-Jordan to reduce to RREF.

The row vectors with the leading 1s form a basis for the row space of R. Basis for a subspace: a minimal set of vectors that spans the subspace. Example: S = {(2, 3), (7, 0)}; the span of S is all of R^2, i.e. everything in R^2 can be made from these. Writing c1(2, 3) + c2(7, 0) = (x1, x2) gives 2c1 + 7c2 = x1 and 3c1 + 0c2 =

A square matrix A is said to be symmetric if A = A^T. Solving a system of equations: 3x + 2y = 7 and -6x + 6y = 6. Write [3 2; -6 6][x; y] = [7; 6], i.e. AX = b. To solve, multiply by A^-1: A^-1 A X = A^-1 b => IX = A^-1 b => X = A^-1 b. The substitution method often used with two unknowns may be just as good for a 2x2 system, but with a 3x3 system this approach is better. Here A^-1 = (1/|A|) [6 -2; 6 3], where

Say we have a couple of vectors, v1 and v2. The set B = {v1, v2} is a basis for R^2 (if we do RREF on the matrix with v1 and v2 as columns, we get the identity matrix). Suppose 3v1 + 2v2 = a for some vector a. Then [a]_B (the vector a with respect to the basis B) is the vector of weights (3, 2). These weights are called coordinates: the coordinates with respect to B (as opposed to the standard coordinates, which are with respect to the standard basis).

x2, so c1 = x2/3. Substituting: (2/3)x2 + 7c2 = x1 => 7c2 = x1 - (2/3)x2 => c2 = x1/7 - (2/21)x2. Give any two numbers x1 and x2 and do this; it will never break, these two formulas always work, so a linear combination can always be found. But are the vectors linearly independent? Setting c1(2, 3) + c2(7, 0) = (0, 0), the only solution is c1 = c2 = 0, so they are linearly independent, and we can definitely say that S is a basis for R^2. Another example: given a matrix, find the basis for
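The two coordinate formulas above can be checked mechanically; a minimal sketch in pure Python (exact arithmetic via fractions), assuming S = {(2, 3), (7, 0)} as read off the equations:

```python
from fractions import Fraction as F

def coordinates(x1, x2):
    """Solve c1*(2, 3) + c2*(7, 0) = (x1, x2) with the derived formulas."""
    c1 = F(x2, 3)
    c2 = F(x1, 7) - F(2 * x2, 21)
    return c1, c2

# Pick any target (x1, x2) and check the combination reproduces it.
c1, c2 = coordinates(5, 9)
assert 2 * c1 + 7 * c2 == 5 and 3 * c1 + 0 * c2 == 9
print(c1, c2)
```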

|A| = 3*6 - 2*(-6) = 30, so A^-1 = (1/30)[6 -2; 6 3], and X = A^-1 b = (1/30)[6 -2; 6 3][7; 6] = (1/30)[30; 60] = [1; 2].

The two lines intersect at (x, y) = (1, 2). In short: use Gauss-Jordan and insert into the first equation! To know whether the system is consistent, put the b's at the back of the matrix you are doing RREF on, i.e. [A|b]. If the bottom row of the coefficient part is all zeros, the combination of b's on that row (for example b1 + b2 - b3) must equal 0 for the system to be consistent. Consistent means that it has one or more solutions. Linear dependence: the vectors are linearly dependent if one of them can be represented with the help of one (or more) of the others; otherwise they are independent. Equivalently, {v1, ..., vk} is linearly dependent iff c1v1 + ... + ckvk = 0 for some c's of which at least one is not zero. Example: {(2, 1), (3, 2), (1, 2)} gives 2c1 + 3c2 + c3 = 0 and c1 + 2c2 + 2c3 = 0. (Since there are three vectors in R^2, they must be linearly dependent.) We can just set c3 = -1, solve, and we have a nontrivial combination equal to zero, which is exactly linear dependence! The cross product is only defined in R^3. The dot product gives a number, but the cross product gives another vector.
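A short sketch of both points above: solving the dependence example with c3 = -1, and checking that the cross product really gives a vector orthogonal to its inputs. The vectors {(2, 1), (3, 2), (1, 2)} are read off the two equations; the R^3 vectors are arbitrary examples:

```python
from fractions import Fraction as F

# Dependence example: set c3 = -1, leaving 2c1 + 3c2 = 1 and c1 + 2c2 = 2.
v1, v2, v3 = (2, 1), (3, 2), (1, 2)
det = 2 * 2 - 3 * 1
c1 = F(1 * 2 - 3 * 2, det)   # Cramer's rule on the 2x2 system
c2 = F(2 * 2 - 1 * 1, det)
c3 = -1
combo = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))
print(c1, c2, combo)         # a nontrivial combination giving the zero vector

# Cross product (R^3 only): the result is orthogonal to both inputs.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1, 2, 3), (4, 5, 6)
print(dot(a, cross(a, b)), dot(b, cross(a, b)))  # 0 0
```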

the row space of this, and what is the rank? Use Gauss-Jordan on it; the nonzero rows of the result form a basis for the row space, and their number is the rank. In linear algebra, the kernel or null space (also nullspace) of a matrix A is the set of all vectors x for which Ax = 0. To find a basis for the null space, RREF the matrix, determine which variables are free (those without a leading 1), express the pivot variables in terms of the free ones, and write the general solution as a combination of vectors, one per free variable; those vectors are the basis for the null space of A. Range/image: if f associates the element b with the element a, then we write b = f(a) and say that b is the image of a under f, or that f(a) is the value of f at a. The set A is called the domain of f and the set B the codomain of f; the subset of the codomain that consists of all images of points in the domain is called the range of f. If T: V -> W is a linear transformation, then the set of vectors in V that map to 0 is called the kernel of T and is denoted ker(T); the set of all vectors in W that are images under T of at least one vector in V is called the range of T and is denoted R(T). The kernel of T_A is the null space of A, and the range of T_A is the column space of A. What does the cross product a x b do? The result is orthogonal to both a and b (orthogonal means the dot product is 0): a . (a x b) = a1a2b3 - a1a3b2 + a2a3b1 - a2a1b3 + a3a1b2 - a3a2b1 = 0, and the same holds for b . (a x b). Trace of a matrix: the sum of the entries on the main diagonal. Orthogonal complement: let V be some subspace of R^n. If we can find a set where every member is orthogonal to every member of V, then the span of those vectors is the orthogonal complement of V (written V^⊥, an upside-down T). The null space N(A) is the orthogonal complement of the row space of A. An orthogonal matrix is a matrix where A^-1 = A^T.
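The last fact can be checked on a concrete matrix; a small sketch using the 90-degree rotation matrix as an assumed example of an orthogonal matrix:

```python
# Q is the 90-degree rotation matrix, a standard orthogonal matrix.
Q = [[0, -1],
     [1,  0]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Q^T * Q comes out as the identity, i.e. Q^T acts as Q^-1.
print(matmul(transpose(Q), Q))  # [[1, 0], [0, 1]]
```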
A coordinate vector is an explicit representation of a Euclidean vector in an abstract vector space as an ordered list of numbers or, equivalently, as an element of the coordinate space. Coordinate vectors allow calculations with abstract objects to be carried out as calculations with blocks of numbers (matrices, column vectors, row vectors). Inverse: if A and B are square matrices of the same size and AB = BA = I, then A is said to be invertible and B is called an inverse of A. Every elementary matrix is invertible, and the inverse is also an elementary matrix. The matrix A = [a b; c d] is invertible iff ad - bc != 0, in which

and then those vectors are the basis.

The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space: once the matrix is in row echelon form, the nonzero rows are a basis for the row space. This algorithm can be used in general to find a basis for the span of a set of vectors. If the matrix is further simplified to RREF, then the resulting basis is uniquely determined by the row space. Rank of the matrix: the rank of a matrix A is the maximum number of linearly independent row vectors of A. The dim(C(A)), i.e. the rank, is also the number of linearly independent column vectors: the number of pivot columns in the RREF matrix. If we look at the last example in the previous box, the rank of that matrix is three. Basis for the column space: the columns of A span the column space, but they may not form a basis if the column vectors are not linearly independent. To find a basis, RREF the matrix and see which columns have leading 1s; the corresponding _columns_ of the _original_ matrix are the basis for the column space. In the first box (the last example) we RREF'd a matrix and saw that columns 1, 2 and 4 have leading 1s, so those columns of the _original_ matrix are the basis for the column space. Null space/nullity: first get the matrix in RREF. If the RREF has rows of five numbers, it multiplies a vector of unknowns x1..x5, and each nonzero row is one equation. Put the pivot elements on the left side of each equation and express them in terms of the free variables. The nullity of the example we have used before is 5 - 3 = 2. For another example, a matrix B, N(B) = N(rref(B)) = span(v1, v2, v3); those three vectors are linearly independent and therefore a basis for the null space of B. The nullity is the number of free variables in the RREF of A, i.e. the number of non-pivot columns in the RREF of A; for B it is three.
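The rank/nullity bookkeeping above can be sketched with a small RREF routine (pure Python with exact fractions; the 3x4 matrix is an assumed example, not the one from the notes):

```python
from fractions import Fraction as F

def rref(M):
    """Row-reduce M (a list of rows) and return (RREF, pivot columns)."""
    M = [[F(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_cols, r = [], 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot: c is a free column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivot_cols.append(c)
        r += 1
        if r == rows:
            break
    return M, pivot_cols

A = [[1, 2, 0, 1],
     [2, 4, 1, 1],
     [3, 6, 1, 2]]                        # row 3 = row 1 + row 2, so rank 2
R, pivots = rref(A)
rank = len(pivots)
nullity = len(A[0]) - rank
print(rank, nullity)                      # rank + nullity = number of columns
```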

Image of a subset under a transformation: take a shape S (here a triangle) with vertices x0, x1 and x2, and a transformation T(x) = Ax. Apply T to each vertex to get T(x0), T(x1) and T(x2); the transformation of the whole shape (triangle) is the image of S under T.
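A minimal sketch of taking the image of a triangle under T(x) = Ax; the matrix and vertices here are assumed stand-ins, since the originals did not survive in the notes:

```python
# Assumed example: a scaling matrix and a small triangle.
A = [[2, 0],
     [0, 3]]

def T(x):
    """Apply the matrix A to the vector x."""
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

S = [(0, 0), (1, 0), (0, 1)]        # vertices x0, x1, x2
image = [T(v) for v in S]           # the image of S under T
print(image)                        # [(0, 0), (2, 0), (0, 3)]
```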

Vector transformations: f(x1, x2, x3) = (x1 + 2x2, 3x3), f: R^3 -> R^2. For example f((2, 4, 1)) = (2 + 2*4, 3*1) = (10, 3). The function maps the

graphs to each other. Transformation: a transformation is a function operating on vectors! T is the name used instead of f (as in f(x)); it transforms one image into another. When asked for the transformation, do what you would have done with a function, like above. Linear transformations: a linear transformation is a transformation (a function!) T: R^n -> R^m such that for all a-vector and b-vector in R^n and every scalar c: T(a-vector + b-vector) = T(a-vector) + T(b-vector) and T(c*a-vector) = c*T(a-vector). Example: T(x1, x2) = (x1 + x2, 3x2), with two vectors a-vector and b-vector. Is this a linear transformation?
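The two conditions can be tested numerically; a sketch assuming T(x1, x2) = (x1 + x2, 3x2) as the example map (the second component is an assumption), together with a quadratic map that fails:

```python
def T(v):
    """Assumed example map: T(x1, x2) = (x1 + x2, 3*x2)."""
    x1, x2 = v
    return (x1 + x2, 3 * x2)

def T_quad(v):
    """Squares each component: not linear."""
    return (v[0] ** 2, v[1] ** 2)

def add(u, v):   return (u[0] + v[0], u[1] + v[1])
def scale(c, v): return (c * v[0], c * v[1])

a, b, c = (1, 2), (3, 4), 5
print(T(add(a, b)) == add(T(a), T(b)))             # True: additivity holds
print(T(scale(c, a)) == scale(c, T(a)))            # True: homogeneity holds
print(T_quad(scale(c, a)) == scale(c, T_quad(a)))  # False: c^2 appears instead of c
```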

First form the sum a-vector + b-vector.

But what is the transformation of this vector? Compute T(a-vector + b-vector); then compute T(a-vector) and T(b-vector) separately, so that the two sides can be compared. We can also check the scalar condition with c*a-vector, choosing some value for c.

Is the equation Ax = b consistent, i.e. does it have one or more solutions? Set up [A|b] and RREF it. The pivot variables are determined by the leading 1s and the remaining numbers; once this is solved, the free variables (e.g. x4) can be assigned any arbitrary value. Coordinates with respect to a basis: let V be a subspace of R^n with basis B = {v1, v2, ..., vk}. If a-vector is an element of V, then a-vector = c1v1 + c2v2 + ... + ckvk, and the c's are the coordinates with respect to B: [a-vector]_B = (c1, ..., ck). Let's

Computing T(a-vector) + T(b-vector) and comparing it with the transformation of the sum, we

see that we have met our first criterion. For c*a-vector, the scalar c comes out on the outside: T(c*a-vector) = c*T(a-vector), which is the same as c times the transformation of a-vector, so we have met our second condition. Since we meet both conditions, this is a linear transformation! Here is something that does _not_ work: T: R^2 -> R^2 with a squared component (say T(x1, x2) = (x1^2, x2^2)). Then T(c*a-vector) =

case the inverse is given by the formula A^-1 = (1/(ad - bc)) [d -b; -c a].
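The 2x2 inverse formula can be checked by multiplying back to the identity; a sketch using a matrix with determinant 30, like the earlier worked example:

```python
from fractions import Fraction as F

def inverse2x2(A):
    """A^-1 = 1/(ad - bc) * [d -b; -c a]; only defined when ad - bc != 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[F(d, det), F(-b, det)],
            [F(-c, det), F(a, det)]]

A = [[3, 2], [-6, 6]]            # det = 3*6 - 2*(-6) = 30
Ainv = inverse2x2(A)
# A * A^-1 comes out as the identity matrix.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[1, 0], [0, 1]])  # True
```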

To find the inverse of an invertible matrix, row-reduce the matrix while carrying the identity matrix alongside it: reduce [A | I] until the left half is I, and the right half is then A^-1.
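A sketch of the [A | I] algorithm in pure Python (exact fractions; it assumes the input really is invertible):

```python
from fractions import Fraction as F

def inverse(A):
    """Row-reduce the augmented matrix [A | I]; the right half becomes A^-1."""
    n = len(A)
    M = [[F(x) for x in row] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)  # assumes invertible
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]                  # normalize pivot row
        for i in range(n):
            if i != c:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

print(inverse([[2, 1], [1, 1]]))  # [[1, -1], [-1, 2]]
```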

c^2*T(a-vector), not c*T(a-vector). We can see that this

conflicts with one of the conditions; therefore it is not a linear transformation! Transpose: when A is a square matrix, the transpose A^T can be obtained by interchanging entries that are symmetrically positioned around the main diagonal. Elementary matrix: a matrix that can be obtained from the n x n identity matrix I_n by performing a _SINGLE_ elementary row operation. Every elementary matrix is invertible, and the inverse is also an elementary matrix. A triangular matrix is invertible iff its diagonal entries are all nonzero. The inverse of an invertible lower triangular matrix is lower triangular, and vice versa. If A is a square matrix, then the minor M_ij is defined to be the determinant of the submatrix that remains after the i-th row and the j-th column are deleted from A. Example: a minor with submatrix [5 6; 4 8] gives 5*8 - 6*4 =

16 (times (-1)^(i+j) for the corresponding cofactor C_ij). The number obtained by multiplying the entries in any row or column by the corresponding cofactors and adding the resulting products is called the determinant of A, and the sums themselves are the cofactor expansions of A. Example, expanding along a first row (3, -2, 5): det(A) = 3*M11 - (-2)*M12 + 5*M13

= 3(-4) - (-2)(-2) + 5(3) = -12 - 4 + 15 = -1.
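Cofactor expansion along the first row translates directly into a recursive determinant; a sketch, using a 3x3 matrix consistent with the Khan Academy example further down, plus the 2x2 from the earlier system:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # delete row 0, col j
        total += (-1) ** j * M[0][j] * det(minor)         # sign = (-1)^(0+j)
    return total

print(det([[1, 2, 4],
           [2, -1, 3],
           [4, 0, 1]]))          # 35
print(det([[3, 2], [-6, 6]]))    # 30
```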

If A is a square matrix and A has a row of zeros or a column of zeros, then det(A) = 0. For a square matrix, det(A) = det(A^T). If B is the matrix that results when two rows or two columns of A are interchanged, then det(B) = -det(A). If a square matrix is almost an identity matrix, except for one diagonal entry multiplied by a nonzero number k, then the determinant is equal to k. If the identity matrix can be obtained by interchanging two rows, the determinant is -1. If the identity matrix can be obtained by multiplying one row by a scalar and adding it to another, then det(A) = 1. Khan Academy: for a 2x2 matrix B = [a b; c d], det(B) = ad - bc, B^-1 = (1/(ad - bc))[d -b; -c a], and B is invertible iff the determinant is NOT zero. For a 3x3 matrix A = [1 2 4; 2 -1 3; 4 0 1], expanding along the first row: det(A) = 1*|-1 3; 0 1| - 2*|2 3; 4 1| + 4*|2 -1; 4 0|

= 1(-1*1 - 0*3) - 2(2*1 - 4*3) + 4(2*0 - (-1)*4) = -1 + 20 + 16 =

35. Now we also know that A is invertible! How do we solve these determinants in general? Use Gaussian elimination on the matrix: every time you exchange two rows, put a factor (-1) on the outside of the matrix; every time you scale a row down, put the value you scaled it by on the outside as well. When this is done, multiply the diagonal entries together with the numbers you put on the outside, and there you have the determinant!
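The Gauss-based procedure above, as a sketch that tracks only the row-swap sign (rows are combined rather than scaled, so no outside factors are needed beyond the sign):

```python
from fractions import Fraction as F

def det_gauss(M):
    """Eliminate below each pivot; flip the sign per row swap; multiply the diagonal."""
    M = [[F(x) for x in row] for row in M]
    n, sign = len(M), 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return F(0)           # no pivot in this column -> determinant 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign          # each row exchange flips the sign
        for i in range(c + 1, n):
            factor = M[i][c] / M[c][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[c])]
    result = F(sign)
    for i in range(n):
        result *= M[i][i]
    return result

print(det_gauss([[1, 2, 4], [2, -1, 3], [4, 0, 1]]))  # 35
print(det_gauss([[0, 1], [1, 0]]))                    # -1 (one swap)
```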

Eigenvalue: lambda is an eigenvalue of A iff det(lambda*I - A) = 0; for a 2x2 matrix this is a quadratic equation in lambda. Basis for a plane:
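For a 2x2 matrix, det(lambda*I - A) = 0 expands to lambda^2 - trace(A)*lambda + det(A) = 0; a sketch solving that quadratic directly (real eigenvalues only):

```python
import math

def eigenvalues_2x2(A):
    """Roots of lambda^2 - trace*lambda + det = 0, i.e. det(lambda*I - A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("complex eigenvalues")
    r = math.sqrt(disc)
    return sorted([(tr - r) / 2, (tr + r) / 2])

print(eigenvalues_2x2([[2, 0], [0, 3]]))  # [2.0, 3.0]
```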
