
N968200


1. Introduction to Abstract Algebra
2. Tensor Analysis
3. Orthogonal Function Expansion
4. Green's Function
5. Calculus of Variations
6. Perturbation Theory

Reference:
1. Birkhoff, G., MacLane, S., A Survey of Modern Algebra, 2nd ed., The Macmillan Co., New York, 1975.
2. (Chinese-language reference, 1989)
3. Arangno, D. C., Schaum's Outline of Theory and Problems of Abstract Algebra, McGraw-Hill, 1999.
4. Deskins, W. E., Abstract Algebra, The Macmillan Co., New York, 1964.
5. O'Nan, M., Enderton, H., Linear Algebra, 3rd ed., Harcourt Brace Jovanovich, 1990.
6. Hoffman, K., Kunze, R., Linear Algebra, 2nd ed., The Southeast Book Co., New Jersey, 1971.
7. McCoy, N. H., Fundamentals of Abstract Algebra, expanded version, Allyn & Bacon, Boston, 1972.
8. Hildebrand, F. B., Methods of Applied Mathematics, 2nd ed., Prentice-Hall, New Jersey, 1972.
9. Burton, D. M., An Introduction to Abstract Mathematical Systems, Addison-Wesley, Massachusetts, 1965.
10. Grossman, S. I., Derrick, W. R., Advanced Engineering Mathematics, Harper & Row, 1988.
11. Hilbert, D., Courant, R., Methods of Mathematical Physics, vol. 1.
12. Jeffrey, A., Advanced Engineering Mathematics, Harcourt, 2002.
13. Arfken, G. B., Weber, H. J., Mathematical Methods for Physicists, 5th ed., Harcourt, 2001.
14. Morse, P. M., Feshbach, H., Methods of Theoretical Physics, McGraw-Hill, 1953.

David Hilbert

The finiteness theorem
Axiomatization of geometry
The 23 Problems
Formalism

~ from Wikipedia
Born: January 23, 1862, Wehlau, East Prussia
Died: February 14, 1943, Göttingen, Germany
Residence: Germany
Nationality: German
Field: Mathematician
Erdős number: 4
Institutions: University of Königsberg and Göttingen University
Alma mater: University of Königsberg
Doctoral advisor: Ferdinand von Lindemann
Doctoral students: Otto Blumenthal, Richard Courant, Max Dehn, Erich Hecke, Hellmuth Kneser, Robert König, Erhard Schmidt, Hugo Steinhaus, Emanuel Lasker, Hermann Weyl, Ernst Zermelo
Known for: Hilbert's basis theorem, Hilbert's axioms, Hilbert's problems, Hilbert's program, the Einstein-Hilbert action, Hilbert space
Societies: Foreign member of the Royal Society
Spouse: Käthe Jerosch (1864-1945, m. 1892)
Children: Franz Hilbert (1893-1969)
Handedness: Right-handed

Philip M. Morse

Founding ORSA President (1952)
B.S. Physics, 1926, Case Institute; Ph.D. Physics, 1929, Princeton University.
Faculty member at MIT, 1931-1969.

"Operations research is an applied science utilizing all known scientific techniques as tools in solving a specific problem."

Methods of Operations Research
Queues, Inventories, and Maintenance
Library Effectiveness
Quantum Mechanics
Methods of Theoretical Physics
Vibration and Sound
Theoretical Acoustics
Thermal Physics
Handbook of Mathematical Functions, with Formulas, Graphs, and Mathematical Tables

Francis B. Hildebrand

George Arfken

Introduction to Abstract Algebra

Preliminary notions
Systems with a single operation
Mathematical systems with two operations
Matrix theory: an algebraic view


[Overview diagram: the systems treated below are built from a set with one operation (R, ∗), a set with two operations (R, +, ·), and a vector space V(R) over (R, +, ·). The axioms collected in the diagram are:]

Closure: a, b ∈ R ⟹ a ∗ b ∈ R
Associativity: a ∗ (b ∗ c) = (a ∗ b) ∗ c
Identity element: a ∗ e = e ∗ a = a
Inverses: ∀a ∈ R, ∃a⁻¹ ∈ R such that a ∗ a⁻¹ = a⁻¹ ∗ a = e
Commutativity (which may fail): a ∗ b = b ∗ a versus a ∗ b ≠ b ∗ a
Distributive laws (two operations): a · (b + c) = (a · b) + (a · c) and (b + c) · a = (b · a) + (c · a)

Groupoid

A groupoid (R, ∗) must satisfy:

R is closed under the rule of combination
a, b ∈ R ⟹ a ∗ b ∈ R

Ex. Consider the operation ∘ defined on the set S = {1, 2, 3} by the operation table below.

  ∘ | 1  2  3
  --+--------
  1 | 1  2  3
  2 | 3  1  2
  3 | 2  3  1

From the table, we see

2 ∘ (1 ∘ 3) = 2 ∘ 3 = 2   but   (2 ∘ 1) ∘ 3 = 3 ∘ 3 = 1

The associative law fails to hold in this groupoid (S, ∘).
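A short Python check of this example; the dictionary below encodes the operation table as read above (row = first operand), which is an assumption about the table layout.

```python
# Operation table for o on S = {1, 2, 3}; table[(a, b)] = a o b
# (layout assumed: the first operand selects the row).
table = {
    (1, 1): 1, (1, 2): 2, (1, 3): 3,
    (2, 1): 3, (2, 2): 1, (2, 3): 2,
    (3, 1): 2, (3, 2): 3, (3, 3): 1,
}

def op(a, b):
    return table[(a, b)]

S = {1, 2, 3}
# Closure: every table entry lies in S, so (S, o) is a groupoid.
assert all(op(a, b) in S for a in S for b in S)

# Associativity fails: 2 o (1 o 3) = 2 but (2 o 1) o 3 = 1.
print(op(2, op(1, 3)), op(op(2, 1), 3))
```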

Semigroup
A semigroup is a groupoid whose operation satisfies the associative law.
a, b ∈ R ⟹ a ∗ b ∈ R (groupoid)
a, b, c ∈ R ⟹ a ∗ (b ∗ c) = (a ∗ b) ∗ c

Ex. The operation ∗ is defined on R# by a ∗ b = max{a, b}; that is, a ∗ b is the larger of the elements a and b, or either one if a = b. Then

a ∗ (b ∗ c) = max{a, b, c} = (a ∗ b) ∗ c,

which shows (R#, ∗) to be a semigroup.
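A minimal numerical check of this example in Python, sampling random real triples:

```python
# Check associativity of a * b = max{a, b} on sampled real numbers.
import random

def star(a, b):
    return max(a, b)

for _ in range(1000):
    a, b, c = (random.uniform(-10, 10) for _ in range(3))
    assert star(a, star(b, c)) == star(star(a, b), c) == max(a, b, c)
print("max is associative on every sampled triple")
```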

If a, b, c, d ∈ R and (R, ∗) is a semigroup, then

(a ∗ b) ∗ (c ∗ d) = a ∗ ((b ∗ c) ∗ d)

Proof. Denote (c ∗ d) by x. Then

a ∗ ((b ∗ c) ∗ d) = a ∗ (b ∗ (c ∗ d))
                  = a ∗ (b ∗ x)
                  = (a ∗ b) ∗ x
                  = (a ∗ b) ∗ (c ∗ d)

Monoid

A semigroup (R, ∗) having an identity element e for the operation ∗ is called a monoid.
a, b ∈ R ⟹ a ∗ b ∈ R (groupoid)
a, b, c ∈ R ⟹ a ∗ (b ∗ c) = (a ∗ b) ∗ c (semigroup)
∃e ∈ R such that a ∗ e = e ∗ a = a for all a ∈ R (monoid)

Ex. Both the semigroups (S_U, ∪) and (S_U, ∩), where S_U denotes the collection of all subsets of a universal set U, are instances of monoids.

A ∪ ∅ = ∅ ∪ A = A for each A ⊆ U
The empty set ∅ is the identity element for the union operation.

A ∩ U = U ∩ A = A for each A ⊆ U
The universal set U is the identity element for the intersection operation.
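The two identity laws can be illustrated with Python sets; U = {1, 2, 3, 4} below is just a sample universal set.

```python
# Identity elements for union and intersection, checked on a few subsets of U.
U = frozenset({1, 2, 3, 4})
subsets = [frozenset(), frozenset({1}), frozenset({2, 3}), U]

for A in subsets:
    assert A | frozenset() == frozenset() | A == A   # empty set: identity for union
    assert A & U == U & A == A                       # universal set: identity for intersection
print("identity laws hold for every sampled subset of U")
```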

Group

A monoid (R, ∗) in which each element of R has an inverse is called a group.
a, b ∈ R ⟹ a ∗ b ∈ R (groupoid)
a, b, c ∈ R ⟹ a ∗ (b ∗ c) = (a ∗ b) ∗ c (semigroup)
∃e ∈ R such that a ∗ e = e ∗ a = a for all a ∈ R (monoid)
∀a ∈ R, ∃a⁻¹ ∈ R such that a ∗ a⁻¹ = a⁻¹ ∗ a = e (group)

If (R, ∗) is a group and a, b ∈ R, then (a ∗ b)⁻¹ = b⁻¹ ∗ a⁻¹.

Proof. All we need to show is that

(a ∗ b) ∗ (b⁻¹ ∗ a⁻¹) = (b⁻¹ ∗ a⁻¹) ∗ (a ∗ b) = e;

from the uniqueness of the inverse of a ∗ b we would then conclude (a ∗ b)⁻¹ = b⁻¹ ∗ a⁻¹.

(a ∗ b) ∗ (b⁻¹ ∗ a⁻¹) = a ∗ ((b ∗ b⁻¹) ∗ a⁻¹)
                      = a ∗ (e ∗ a⁻¹)
                      = a ∗ a⁻¹
                      = e

A similar argument establishes that (b⁻¹ ∗ a⁻¹) ∗ (a ∗ b) = e.
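The identity (a ∗ b)⁻¹ = b⁻¹ ∗ a⁻¹ can be illustrated numerically in the group of invertible 3 × 3 real matrices under matrix multiplication; the particular matrices below are arbitrary samples.

```python
# (a b)^-1 = b^-1 a^-1 for two sample invertible matrices.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # shifted to keep them comfortably invertible
b = rng.standard_normal((3, 3)) + 3 * np.eye(3)

lhs = np.linalg.inv(a @ b)
rhs = np.linalg.inv(b) @ np.linalg.inv(a)
print(np.allclose(lhs, rhs))   # True
```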

Commutative

a, b ∈ R ⟹ a ∗ b = b ∗ a

Adding the commutative law to each of the preceding structures gives:

a, b ∈ R ⟹ a ∗ b ∈ R (groupoid): commutative groupoid
a, b, c ∈ R ⟹ a ∗ (b ∗ c) = (a ∗ b) ∗ c (semigroup): commutative semigroup
∃e ∈ R such that a ∗ e = e ∗ a = a (monoid): commutative monoid
∀a ∈ R, ∃a⁻¹ ∈ R such that a ∗ a⁻¹ = a⁻¹ ∗ a = e (group): commutative group

Ex. Consider the set of numbers S = {a + b√2 | a, b ∈ Z}, where Z is the set of integers, under the operation ∗ of ordinary multiplication.

1. Closure:
a, b, c, d ∈ Z ⟹ (a + b√2) ∗ (c + d√2) = (ac + 2bd) + (ad + bc)√2 ∈ S
2. Associative property:
a, b, c, d, e, f ∈ Z ⟹ (a + b√2) ∗ ((c + d√2) ∗ (e + f√2)) = ((a + b√2) ∗ (c + d√2)) ∗ (e + f√2)
3. Identity element:
1 = 1 + 0√2 ∈ S
4. Commutative property:
a, b, c, d ∈ Z ⟹ (a + b√2) ∗ (c + d√2) = (c + d√2) ∗ (a + b√2)

Hence (S, ∗) is a commutative monoid.
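A small Python sketch of this example, representing a + b√2 by the integer pair (a, b) so that the closure formula above becomes the multiplication rule:

```python
# Multiplication on S = {a + b*sqrt(2) | a, b in Z}, with elements stored as pairs (a, b).
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)   # (ac + 2bd) + (ad + bc) sqrt(2)

x, y, z = (3, -1), (2, 5), (-4, 7)
e = (1, 0)                                      # identity element 1 = 1 + 0*sqrt(2)

assert mul(x, y) == mul(y, x)                   # commutative
assert mul(x, mul(y, z)) == mul(mul(x, y), z)   # associative
assert mul(x, e) == mul(e, x) == x              # identity
print("(S, *) behaves as a commutative monoid on these samples")
```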

Ring

A ring (R, +, ·) is a nonempty set R with two binary operations + and · on R such that

1. (R, +) is a commutative group
a, b ∈ R ⟹ a + b = b + a
a, b ∈ R ⟹ a + b ∈ R (groupoid)
a, b, c ∈ R ⟹ a + (b + c) = (a + b) + c (semigroup)
∃0 ∈ R such that a + 0 = 0 + a = a (monoid)
∀a ∈ R, ∃(-a) ∈ R such that a + (-a) = (-a) + a = 0 (group)

2. (R, ·) is a semigroup
a, b ∈ R ⟹ a · b ∈ R (groupoid)
a, b, c ∈ R ⟹ a · (b · c) = (a · b) · c (semigroup)

3. The two operations are related by the distributive laws
a, b, c ∈ R ⟹ a · (b + c) = (a · b) + (a · c)
(b + c) · a = (b · a) + (c · a)

A ring (R, +, ·) consists of a nonempty set R and two operations, called addition and multiplication and denoted by + and ·, respectively, satisfying the requirements: for all a, b, c ∈ R,

1. R is closed under addition: a + b ∈ R
2. Commutative: a + b = b + a
3. Associative: a + (b + c) = (a + b) + c
4. Identity element 0: ∃0 ∈ R such that a + 0 = 0 + a = a
5. Inverse: ∃(-a) ∈ R such that a + (-a) = 0
6. R is closed under multiplication: a · b ∈ R
7. Associative: a · (b · c) = (a · b) · c
8. Distributive laws: a · (b + c) = (a · b) + (a · c) and (b + c) · a = (b · a) + (c · a)

Monoid Ring
A monoid ring (R, +, ·) is a ring in which the multiplicative semigroup (R, ·) has an identity element, i.e., a ring with identity.

Ring:
a, b, c ∈ R ⟹ a + b = b + a
a + b ∈ R (groupoid)
a + (b + c) = (a + b) + c (semigroup)
∃0 ∈ R such that a + 0 = 0 + a = a (monoid)
∃(-a) ∈ R such that a + (-a) = (-a) + a = 0 (group)
a · b ∈ R (groupoid)
a · (b · c) = (a · b) · c (semigroup)
a · (b + c) = (a · b) + (a · c)
(b + c) · a = (b · a) + (c · a)

Monoid ring: in addition,
∃e ∈ R such that a · e = e · a = a (monoid)

Ring (R, +, ·) with the commutative property a · b = b · a

a, b, c ∈ R:
a + b ∈ R
a + b = b + a
a + (b + c) = (a + b) + c
∃0 ∈ R such that a + 0 = 0 + a = a
∃(-a) ∈ R such that a + (-a) = (-a) + a = 0
a · b ∈ R
a · (b · c) = (a · b) · c
a · (b + c) = (a · b) + (a · c), (b + c) · a = (b · a) + (c · a)
a · b = b · a

Commutative ring

If, in addition, ∃e ∈ R such that a · e = e · a = a:

Commutative monoid ring

Subring

The triple (S, +, ·) is a subring of the ring (R, +, ·) if
1. S is a nonempty subset of R
2. (S, +) is a subgroup of (R, +)
3. S is closed under multiplication

S ⊆ R, S ≠ ∅
a, b, c ∈ S ⟹ a + b = b + a
a + b ∈ S
a + (b + c) = (a + b) + c
∃0 ∈ S such that a + 0 = 0 + a = a
∃(-a) ∈ S such that a + (-a) = (-a) + a = 0
a · b ∈ S

The minimal set of conditions for determining subrings

Let (R, +, ·) be a ring and let S be a nonempty subset of R. Then the triple (S, +, ·) is a subring of (R, +, ·) if and only if
1. S is closed under differences: a, b ∈ S ⟹ a - b ∈ S
2. S is closed under multiplication: a, b ∈ S ⟹ a · b ∈ S

Ex. Let S = {a + b√3 | a, b ∈ Z}, where Z is the set of integers; then (S, +, ·) is a subring of (R#, +, ·), where R# is the set of real numbers, since for a, b, c, d ∈ Z

(a + b√3) - (c + d√3) = (a - c) + (b - d)√3 ∈ S
(a + b√3) · (c + d√3) = (ac + 3bd) + (bc + ad)√3 ∈ S

This shows that S is closed under both differences and products.
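The same two closure conditions can be spot-checked in Python, again storing a + b√3 as an integer pair (a, b):

```python
# Minimal subring test for S = {a + b*sqrt(3) | a, b in Z}:
# closure under differences and under products.
import random

def diff(x, y):
    return (x[0] - y[0], x[1] - y[1])

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 3 * b * d, b * c + a * d)

for _ in range(100):
    x = (random.randint(-9, 9), random.randint(-9, 9))
    y = (random.randint(-9, 9), random.randint(-9, 9))
    # Components stay integers, so the results remain in S.
    assert all(isinstance(t, int) for t in diff(x, y) + mul(x, y))
print("S is closed under differences and products")
```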

Field
A field (F, +, ·) is a commutative monoid ring in which each nonzero element has an inverse under ·.

Definition of Field
A field is a mathematical system (F, +, ·) consisting of a nonempty set F and two operations + and · on F, called addition and multiplication, such that
(1) (F, +) is a commutative group, with identity 0;
(2) (F - {0}, ·) is a commutative group, with identity 1;
(3) For each triple of elements a, b, c ∈ F, a · (b + c) = a · b + a · c.

Vector
An n-component, or n-dimensional, vector is an n-tuple of real numbers written either in a row or in a column.

Row vector: x = [x1, x2, ..., xn]
Column vector: x = [x1, x2, ..., xn]ᵀ (the same components arranged in a column)

The xk ∈ R# are called the components of the vector, and n is the dimension of the vector.

Vector space

A vector space (or linear space) ((V, +), (F, +, ·), ∘), written V(F), over the field F consists of the following:

1. A commutative group (V, +) whose elements are called vectors.
a, b ∈ V ⟹ a + b = b + a
a, b ∈ V ⟹ a + b ∈ V (groupoid)
a, b, c ∈ V ⟹ a + (b + c) = (a + b) + c (semigroup)
∃0 ∈ V such that a + 0 = 0 + a = a (monoid)
∀a ∈ V, ∃(-a) ∈ V such that a + (-a) = (-a) + a = 0 (group)

2. A field (F, +, ·) whose elements are called scalars.
a, b, c ∈ F ⟹ a + b = b + a
a + b ∈ F
a + (b + c) = (a + b) + c
∃0 ∈ F such that a + 0 = 0 + a = a
∃(-a) ∈ F such that a + (-a) = (-a) + a = 0
a · b ∈ F
a · (b · c) = (a · b) · c
a · (b + c) = (a · b) + (a · c), (b + c) · a = (b · a) + (c · a)
∃1 ∈ F such that a · 1 = 1 · a = a
∀a ≠ 0, ∃a⁻¹ ∈ F such that a · a⁻¹ = a⁻¹ · a = 1

3. An operation ∘ of scalar multiplication connecting the group and the field, which satisfies the properties
(a) ∀c ∈ F and x ∈ V, there is defined an element c ∘ x ∈ V;
    V is closed under left multiplication by scalars.
(b) (c1 + c2) ∘ x = (c1 ∘ x) + (c2 ∘ x);
(c) (c1 · c2) ∘ x = c1 ∘ (c2 ∘ x);
(d) c ∘ (x + y) = (c ∘ x) + (c ∘ y);
(e) 1 ∘ x = x, where 1 is the field identity element.

Ex:
Let the commutative group be (M_{m×n}, +), where M_{m×n} is the set of all m × n matrices and + is the operation of matrix addition. For c ∈ R# and (aij) ∈ M_{m×n}, define scalar multiplication by

c(aij) = (caij) ∈ M_{m×n}

Vector Space

When m = n, we denote the particular vector space by Mn(R#)
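The scalar-multiplication properties (b)-(e) for M_{m×n}(R#) can be checked numerically; the 2 × 3 matrices and scalars below are arbitrary samples.

```python
# (M_{2x3}, +) with scalar multiplication c(a_ij) = (c a_ij): properties (b)-(e).
import numpy as np

x = np.array([[1., 2., 3.], [4., 5., 6.]])
y = np.array([[0., 1., 0.], [2., 0., 2.]])
c1, c2 = 2.0, -3.0

assert np.allclose((c1 + c2) * x, c1 * x + c2 * x)   # (b)
assert np.allclose((c1 * c2) * x, c1 * (c2 * x))     # (c)
assert np.allclose(c1 * (x + y), c1 * x + c1 * y)    # (d)
assert np.allclose(1.0 * x, x)                       # (e)
print("scalar-multiplication axioms hold for these samples")
```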

Subspace
Let V(F) be a vector space over the field F, and let W ⊆ V with W ≠ ∅.
The minimum conditions that W(F) must satisfy to be a subspace of V(F) are:
(1) (W, +) is a subgroup of (V, +): x, y ∈ W implies x + y ∈ W;
(2) W is closed under scalar multiplication: x ∈ W and c ∈ F imply cx ∈ W.

If V(F) and V′(F) are vector spaces over the same field, then the mapping f : V → V′ is said to be operation-preserving if
f(x + y) = f(x) + f(y),
f(cx) = c f(x),
i.e., f preserves both operations, for every pair of elements x, y ∈ V and every c ∈ F.

V(F) and V′(F) are algebraically equivalent whenever there exists a one-to-one operation-preserving function from V onto V′.

Linear Transformations
Let V and W be vector spaces. A linear transformation from V into W is a function T from the set V into W with the following two properties:
(i) T(x + y) = T(x) + T(y), for all x, y ∈ V.
(ii) T(αx) = αT(x), for all x ∈ V and all scalars α.

[Figure: T is a function from V into W; its range is {T(x) | x ∈ V} ⊆ W.]

Let V and W be vector spaces over the field F and let T be a linear transformation from V into W.
If V is finite-dimensional, the rank of T is the dimension of the range of T, and the nullity of T is the dimension of the null space of T.
The null space (kernel) of T is the set of all vectors x in V such that T(x) = 0:

ker T = {x ∈ V | T(x) = 0}

[Figure: ker T inside V is mapped to 0 in W; ran T is the image of V under T.]
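For T(x) = A x on a finite-dimensional space, rank and nullity can be computed directly; the 3 × 4 matrix below is a made-up example whose third row is the sum of the first two.

```python
# Rank and nullity of T(x) = A x, with rank(T) + nullity(T) = dim V.
import numpy as np

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # row 3 = row 1 + row 2, so the rank is 2

rank = np.linalg.matrix_rank(A)    # dimension of the range of T
nullity = A.shape[1] - rank        # dimension of ker T, by rank-nullity
print(rank, nullity)               # 2 2
```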

The Algebra of Linear Transformations

Let T : U → V and S : V → W be linear transformations, with U, V, and W vector spaces. The composition of S and T is

(S ∘ T)(x) = S(T(x)), for x in U.

If x1 and x2 are vectors in U, then

(S ∘ T)(x1 + x2) = S(T(x1 + x2))              (by definition of S ∘ T)
                 = S(T(x1) + T(x2))           (by linearity of T)
                 = S(T(x1)) + S(T(x2))        (by linearity of S)
                 = (S ∘ T)(x1) + (S ∘ T)(x2)  (by definition of S ∘ T)

Similarly, we have, with x in U and α a scalar,

(S ∘ T)(αx) = S(T(αx))       (by definition of S ∘ T)
            = S(αT(x))       (by linearity of T)
            = αS(T(x))       (by linearity of S)
            = α(S ∘ T)(x)    (by definition of S ∘ T)

Representation of Linear Transformations by Matrices

Let V be an n-dimensional vector space over the field F, let T be a linear transformation on V, and let α1, α2, ..., αn be an ordered basis for V. If

T(α1) = a11 α1 + a21 α2 + ... + an1 αn
T(α2) = a12 α1 + a22 α2 + ... + an2 αn
...
T(αn) = a1n α1 + a2n α2 + ... + ann αn

then

T[α1, α2, ..., αn] = [T(α1), T(α2), ..., T(αn)] = (α1, α2, ..., αn) A,

where

A = [ a11 a12 ... a1n
      a21 a22 ... a2n
      ...
      an1 an2 ... ann ]

is the matrix of the linear transformation T relative to the ordered basis α1, α2, ..., αn.
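A numerical sketch of this construction: if T acts on R² through a standard-basis matrix M (the matrix and basis below are arbitrary samples), and B has the basis vectors α1, α2 as its columns, then the representing matrix is A = B⁻¹ M B, whose column j holds the coordinates of T(αj) in the basis.

```python
# Matrix of T relative to the ordered basis {alpha_1, alpha_2}: A = B^{-1} M B.
import numpy as np

M = np.array([[1., 2.],
              [3., 4.]])               # T in the standard basis (sample)
B = np.column_stack([[1., 0.],         # alpha_1
                     [1., 1.]])        # alpha_2

A = np.linalg.inv(B) @ M @ B
print(A)
# T(alpha_1) = [1, 3]^T = -2*alpha_1 + 3*alpha_2, so the first column of A is [-2, 3].
```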

Inner Product
Let a = a1 i + a2 j + a3 k and b = b1 i + b2 j + b3 k be two vectors in R³. The inner product of a and b, written (a, b), which is also denoted by a · b, is

(a, b) = aᵀb = [a1 a2 a3] [b1, b2, b3]ᵀ = a1 b1 + a2 b2 + a3 b3

Certain properties of the inner product follow immediately from the definition. If a, b, c are vectors in R³ and α and β are real scalars:
(1) (a, a) ≥ 0, and (a, a) = 0 if and only if a = 0
(2) (αa, b) = α(a, b)
(3) (a, b + c) = (a, b) + (a, c) and (a + b, c) = (a, c) + (b, c)
(4) (a, b) = (b, a)

It follows from the Pythagorean theorem that the length of the vector a = a1 i + a2 j + a3 k is √(a1² + a2² + a3²). The length of the vector a is denoted by |a|:

(a, a) = a1² + a2² + a3²,  so  |a| = (a, a)^(1/2)

* Let a and b be nonzero vectors in R³ and let θ be the angle between them. Then

(a, b) = |a| |b| cos θ,

and the law of cosines gives

|b - a|² = |a|² + |b|² - 2 |a| |b| cos θ

[Figure: the vectors a and b in R³, with the components (a1, a2, a3) of a, the angle θ between a and b, and the difference vector b - a.]
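The inner product, length, and angle formulas above, evaluated for two sample vectors in R³:

```python
# (a, b) = a^T b, |a| = (a, a)^(1/2), and (a, b) = |a||b| cos(theta).
import numpy as np

a = np.array([1., 2., 2.])
b = np.array([2., 0., 1.])

inner = a @ b                         # a1*b1 + a2*b2 + a3*b3 = 4
length_a = np.sqrt(a @ a)             # |a| = 3
theta = np.arccos(inner / (np.linalg.norm(a) * np.linalg.norm(b)))
print(inner, length_a, np.degrees(theta))
```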

Inner Product Space

If V is a real vector space, a function that assigns to every pair of vectors x and y in V a real number (x, y) is said to be an inner product on V if it has the following properties:
(1) (x, x) ≥ 0, and (x, x) = 0 if and only if x = 0
(2) (x, y + z) = (x, y) + (x, z) and (x + y, z) = (x, z) + (y, z)
(3) (αx, y) = α(x, y) and (x, αy) = α(x, y)
(4) (x, y) = (y, x)

The vector space V, together with its inner product, is said to constitute an inner product space.

For x = [ξ1, ξ2, ..., ξn]ᵀ and y = [η1, η2, ..., ηn]ᵀ,

(x, y) = ξ1 η1 + ξ2 η2 + ... + ξn ηn = xᵀy

Eigenvalues and Eigenvectors

Let T : V → V be a linear operator on a vector space V.
(a) An eigenvalue of T is a scalar λ for which T(v) = λv for some nonzero vector v in V.
(b) If λ is an eigenvalue of T, then an eigenvector of T for the eigenvalue λ is any nonzero vector v for which T(v) = λv.
(c) If λ is an eigenvalue of T, then the eigenspace of T for the eigenvalue λ is the set {v ∈ V | T(v) = λv}, consisting of all the eigenvectors of T for λ, plus 0.

Ex. Let T be the linear operator on R² defined by

T(x) = [ 4  -1 ] x
       [ 2   1 ]

Since

T([1, 1]ᵀ) = [ 4  -1 ] [1, 1]ᵀ = [3, 3]ᵀ = 3 [1, 1]ᵀ
             [ 2   1 ]

and

T([1, 2]ᵀ) = [ 4  -1 ] [1, 2]ᵀ = [2, 4]ᵀ = 2 [1, 2]ᵀ,
             [ 2   1 ]

it follows that 3 and 2 are eigenvalues of T with corresponding eigenvectors [1, 1]ᵀ and [1, 2]ᵀ, respectively. In this case, the linear operator T fixes the lines through the origin determined by scalar multiples of the eigenvectors.

[Figure: the eigenvector i + j is carried to 3i + 3j and the eigenvector i + 2j is carried to 2i + 4j, so each eigenvector stays on its own line through the origin.]
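The example can be verified with numpy (the matrix entries follow the reconstruction above):

```python
# Eigenvalues 3 and 2 of T, with eigenvectors along i + j and i + 2j.
import numpy as np

A = np.array([[4., -1.],
              [2.,  1.]])
v1 = np.array([1., 1.])
v2 = np.array([1., 2.])

print(A @ v1)                 # [3. 3.] = 3 * v1
print(A @ v2)                 # [2. 4.] = 2 * v2
print(np.linalg.eigvals(A))   # eigenvalues 3 and 2
```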

Diagonalization
A square matrix is said to be a diagonal matrix if all of its entries are zero except those on the main diagonal:

[ λ1  0  ...  0
   0  λ2 ...  0
  ...
   0   0 ... λn ]

A linear operator T on a finite-dimensional vector space V is diagonalizable if there is a basis for V each vector of which is an eigenvector of T.

Orthogonalization of Vector Sets

It is desirable, as in the preceding section, to form from a set of s linearly independent vectors u1, u2, ..., us an orthogonal set of s linear combinations of the original vectors. We first select any one of the original vectors.

Let v1 = u1, and divide it by its length: e1 = v1 / l(v1) = u1 / l(u1).

Then we choose a second vector u2 and set v2 = u2 - c e1. The requirement that v2 be orthogonal to e1 leads to the determination

(e1, v2) = (e1, u2) - c (e1, e1) = 0,  or  c = (e1, u2),

so that v2 = u2 - (e1, u2) e1.

Let e2 = v2 / l(v2). In the same way, v3 = u3 - c1 e1 - c2 e2 leads to v3 = u3 - (e1, u3) e1 - (e2, u3) e2.

A continuation of this process finally determines the sth member of the required set in the form

es = vs / l(vs),  where  vs = us - Σ_{k=1}^{s-1} (ek, us) ek.

This is the Gram-Schmidt orthogonalization procedure.
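The procedure translates directly into a short numpy routine (a sketch of the formulas above, assuming the input vectors u1, ..., us are linearly independent):

```python
# Gram-Schmidt: v_s = u_s - sum_k (e_k, u_s) e_k, then e_s = v_s / l(v_s).
import numpy as np

def gram_schmidt(U):
    """Rows of U are u_1, ..., u_s; returns the orthonormal vectors e_1, ..., e_s as rows."""
    E = []
    for u in U:
        v = u - sum(np.dot(e, u) * e for e in E)   # subtract projections onto earlier e_k
        E.append(v / np.linalg.norm(v))            # divide by its length
    return np.array(E)

U = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
E = gram_schmidt(U)
print(np.round(E @ E.T, 10))   # identity matrix: the e_k are orthonormal
```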

Quadratic Forms
A homogeneous expression of second degree of the form

A = a11 x1² + a22 x2² + ... + ann xn² + 2 a12 x1 x2 + 2 a13 x1 x3 + ... + 2 a_{n-1,n} x_{n-1} xn

is called a quadratic form. If we write yi = (1/2) ∂A/∂xi, we obtain the equations:

a11 x1 + a12 x2 + ... + a1n xn = y1
a21 x1 + a22 x2 + ... + a2n xn = y2
...
an1 x1 + an2 x2 + ... + ann xn = yn

or, equivalently, Σ_j aij xj = yi, with aij = aji.

The set of equations can be written in the form A x = y, where A = [aij] is a symmetric matrix, and

A = (x, y) = xᵀA x
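A numerical check of the relation yi = (1/2) ∂A/∂xi = (A x)i for a sample symmetric matrix, comparing one component of the gradient against a finite difference:

```python
# Quadratic form Q(x) = x^T A x with symmetric A, and y = A x as half its gradient.
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])          # a_ij = a_ji
x = np.array([1., -2., 3.])

Q = x @ A @ x                         # value of the quadratic form at x
y = A @ x                             # y_i = (1/2) dQ/dx_i

h = 1e-6
x_h = x + h * np.array([1., 0., 0.])  # perturb x_1 only
print(y[0], ((x_h @ A @ x_h) - Q) / (2 * h))   # the two numbers agree to ~1e-6
```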

Canonical Form
Let the vector x be expressed in terms of x′ by the equation x = Q x′. Then

A = (Q x′)ᵀ A (Q x′) = x′ᵀ Qᵀ A Q x′,  or  A = x′ᵀ A′ x′,

where the new matrix A′ is defined by the equation A′ = Qᵀ A Q. We want A′ to be a diagonal matrix.

If the eigenvalues and corresponding eigenvectors of the real symmetric matrix A are known, a matrix Q having this property can be easily constructed:

A e1 = λ1 e1, ..., A en = λn en   (ek: unit eigenvectors, λk: eigenvalues)

Let the matrix Q be constructed in such a way that the elements of the unit eigenvectors e1, e2, ..., en are the elements of the successive columns of Q:

Q = [ e11 e21 ... en1
      e12 e22 ... en2
      ...
      e1n e2n ... enn ]

Then

A Q = [ λ1 e11  λ2 e21 ... λn en1
        λ1 e12  λ2 e22 ... λn en2
        ...
        λ1 e1n  λ2 e2n ... λn enn ]

or

A Q = Q [ λ1  0  ...  0
           0  λ2 ...  0
          ...
           0   0 ... λn ]

Since the vectors e1, ..., en are linearly independent, it follows that |Q| ≠ 0. Thus the inverse Q⁻¹ exists, and by premultiplying the equal members of the equation above by Q⁻¹ we obtain the result

Q⁻¹ A Q = [λi δij]

Because the ek are orthonormal, Qᵀ = Q⁻¹, i.e., Qᵀ Q = I (Q is an orthogonal matrix), so A′ = Qᵀ A Q is diagonal and the quadratic form reduces to A = Σ_i λi x′i².

Ex: Let T be the linear operator on R³ which is represented in the standard ordered basis by the matrix

A = [  5  -6  -6
      -1   4   2
       3  -6  -4 ]

Eigenvalues: λ1 = 1, λ2 = 2, λ3 = 2

Eigenvectors: α1 = [3, -1, 3]ᵀ, α2 = [2, 1, 0]ᵀ, α3 = [2, 0, 1]ᵀ

Then we have

Q = [  3  2  2
      -1  1  0
       3  0  1 ]

an invertible matrix whose columns are the eigenvectors, and

Q⁻¹ A Q = [ 1 0 0
            0 2 0
            0 0 2 ] = D,   a diagonal matrix.
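A direct numerical verification of this example:

```python
# Q^{-1} A Q = diag(1, 2, 2) for the matrix A and eigenvector matrix Q above.
import numpy as np

A = np.array([[ 5., -6., -6.],
              [-1.,  4.,  2.],
              [ 3., -6., -4.]])
Q = np.array([[ 3., 2., 2.],
              [-1., 1., 0.],
              [ 3., 0., 1.]])          # columns: the eigenvectors alpha_1, alpha_2, alpha_3

D = np.linalg.inv(Q) @ A @ Q
print(np.round(D, 10))                  # diag(1, 2, 2)
```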
