Problem Set 3
8.04 Spring 2013
Solutions
February 26, 2013
An operator $\hat O$ is linear if
$$\hat O(af(x) + bg(x)) = a\hat O f(x) + b\hat O g(x). \tag{1}$$
The squaring operator $\hat S$ is not linear:
$$\hat S(af(x) + bg(x)) = (af(x) + bg(x))^2 \tag{2a}$$
$$\neq af^2(x) + bg^2(x) = a\hat S f(x) + b\hat S g(x). \tag{2b}$$
The derivative operator is linear:
$$\hat D(af(x) + bg(x)) = \frac{\partial}{\partial x}\left(af(x) + bg(x)\right) = a\frac{\partial f}{\partial x} + b\frac{\partial g}{\partial x} = a\hat D f(x) + b\hat D g(x). \tag{3}$$
Integrating from 0 to x is linear:
$$\hat Q(af(x) + bg(x)) = \int_0^x \left(af(x') + bg(x')\right)dx' = a\int_0^x f(x')\,dx' + b\int_0^x g(x')\,dx' = a\hat Q f(x) + b\hat Q g(x). \tag{4}$$
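As a quick numerical sanity check of Eq. (4) (not part of the original solution), one can approximate $\hat Q$ with a Riemann sum; the functions $f$, $g$ and the constants $a$, $b$, $x$ below are arbitrary illustrative choices.

```python
# Numerical sketch of Eq. (4): integration from 0 to x is a linear operation.
# The functions f, g and the constants a, b, x are arbitrary choices.
import math

def Q(func, x, n=10000):
    """Midpoint-rule approximation of the integral of func from 0 to x."""
    dx = x / n
    return sum(func((i + 0.5) * dx) for i in range(n)) * dx

a, b, x = 2.0, -1.5, 1.3
f, g = math.cos, lambda t: t ** 2

lhs = Q(lambda t: a * f(t) + b * g(t), x)   # Q(af + bg)
rhs = a * Q(f, x) + b * Q(g, x)             # a Qf + b Qg
assert abs(lhs - rhs) < 1e-9
```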
Comparing the two sides of the linearity condition for the operator $\hat A$,
$$\hat A(af(x) + bg(x)), \tag{5a}$$
$$a\hat A f(x) + b\hat A g(x), \tag{5b}$$
one finds $\hat A(af(x) + bg(x)) \neq a\hat A f(x) + b\hat A g(x)$, and $\hat A$ is nonlinear.
Mapping to a fixed function is not linear. As a counterexample, let $f(x) = x^2$ and $g(x) = x$. Then
$$\hat P_h(ax^2 + bx) = h(x), \tag{6a}$$
$$a\hat P_h x^2 + b\hat P_h x = ah(x) + bh(x) = (a + b)h(x). \tag{6b}$$
So, whenever $a + b \neq 1$,
$$\hat P_h(af(x) + bg(x)) \neq a\hat P_h f(x) + b\hat P_h g(x), \tag{7}$$
and $\hat P_h$ is nonlinear.
$$\hat O h(x) = \lambda h(x), \tag{8}$$
where $\lambda$ is some number (i.e. not a function) called the eigenvalue. In other words, an eigenfunction $h(x)$ is a function that happens to have just the right form so that, even if the operator $\hat O$ is in general very complicated, its action on $h(x)$ is simply to multiply the function by some number (which is allowed to be complex).
Note that, if the operator $\hat O$ is linear and $h(x)$ is an eigenfunction, then $Ah(x)$, i.e. the function $h(x)$ multiplied by a constant $A$, is also an eigenfunction of $\hat O$. Indeed,
$$\hat O(Ah(x)) = A\hat O h(x) = A\lambda h(x) = \lambda(Ah(x)). \tag{9}$$
For this reason, when we deal with linear operators we do not keep track of the constant $A$, i.e. we set it equal to 1 and pick only one representative function $h(x)$. Below we shall do the same: for nonlinear operators we consider all the possible eigenfunctions; for linear operators we pick only one eigenfunction among those which differ only by a proportionality constant.
$$\hat 1 f(x) = \lambda f(x). \tag{10}$$
Consider first case (i), i.e. the space of arbitrary functions defined on the real line. Since the identity operator $\hat 1$ maps $f(x)$ to itself, we conclude that any nonvanishing function is an eigenfunction of $\hat 1$, and that the eigenvalue is always 1.
The above discussion holds also for the other three spaces of functions (ii)-(iv).
Again, let's first consider case (i). To find eigenfunctions and eigenvalues of the square operator $\hat S$, we need to solve the equation
$$f^2(x) = \lambda f(x), \tag{11}$$
where $\lambda$ is any complex number, and will be the eigenvalue associated to $f(x)$. The solution is
$$f(x) = \lambda \text{ or } 0, \tag{12}$$
meaning that $f(x)$ can be equal to either $\lambda$ or 0 at each point $x$; for example,
$$f(x) = \begin{cases}\lambda & \text{if } x \in [0,1], \\ 0 & \text{otherwise.}\end{cases}$$
Note in particular that this function is not continuous, so the discussion so far holds only for case (i). Regarding cases (ii) and (iii), $f(x)$ needs to be continuous, and thus the only possibility is that $f(x)$ is a constant equal to $\lambda$ or 0. Since a nonvanishing constant is not square-integrable, in case (iv) there are no eigenfunctions. The eigenvalue equation for the position operator is
$$\hat x f(x) = x f(x) = \lambda f(x). \tag{13}$$
In the space of arbitrary functions, the nonvanishing functions which satisfy this equation are $f(x) = \delta(x - x_0)$, for any $x_0$, and the corresponding eigenvalues are $\lambda = x_0$. Indeed, from Problem Set 2, we know that the equality $x\delta(x - x_0) = \lambda\,\delta(x - x_0)$ holds only if, for any function $g(x)$,
$$\int x\,\delta(x - x_0)g(x)\,dx = \lambda\int \delta(x - x_0)g(x)\,dx, \tag{14}$$
and indeed
$$\int x\,\delta(x - x_0)g(x)\,dx = x_0\, g(x_0) = \int x_0\,\delta(x - x_0)g(x)\,dx. \tag{15}$$
The above discussion holds only for (i). Note that $f(x) = \delta(x - x_0)$ is not a continuous function. This means that in cases (ii)-(iv) there are no eigenfunctions for the operator $\hat x$.
One could ask: is the delta function square integrable (although not continuous)?
To answer this question, we need to use one of the definitions of the delta function,
$$\delta(x - x_0) = \lim_{a\to 0}\frac{1}{a\sqrt{\pi}}\,e^{-(x - x_0)^2/a^2}.$$
Then we have
$$\int \delta(x - x_0)^2\,dx = \lim_{a\to 0}\int \frac{1}{a^2\pi}\,e^{-2(x - x_0)^2/a^2}\,dx = \lim_{a\to 0}\frac{1}{a\sqrt{2\pi}} = \infty;$$
therefore the delta function is not square-integrable.
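The divergence above can be seen numerically (an illustrative sketch, not part of the original solution): integrating the square of the Gaussian approximation reproduces the $1/(a\sqrt{2\pi})$ behavior as $a$ shrinks.

```python
# Sketch: the squared Gaussian approximation of delta(x - x0) integrates to
# 1/(a*sqrt(2*pi)), which blows up as a -> 0.  Grid parameters are arbitrary.
import math

def sq_norm(a, half_width=20.0, n=40001):
    """Midpoint Riemann sum of delta_a(x)^2 over [-half_width, half_width]."""
    dx = 2 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * dx
        delta_a = math.exp(-x ** 2 / a ** 2) / (a * math.sqrt(math.pi))
        total += delta_a ** 2 * dx
    return total

for a in [1.0, 0.5, 0.25]:
    expected = 1 / (a * math.sqrt(2 * math.pi))   # the analytic value
    assert abs(sq_norm(a) - expected) < 1e-3 * expected
```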
The eigenstate equation for the derivative operator $\hat D$ is
$$\hat D f(x) = \lambda f(x), \tag{16}$$
i.e.
$$\frac{\partial f(x)}{\partial x} = \lambda f(x). \tag{17}$$
The solution to this equation in the space of arbitrary functions is
$$f(x) = e^{\lambda x}. \tag{18}$$
Thus, the eigenstates of the derivative operator are exponentials, and the eigenvalues are all the complex numbers.
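As a quick check (an illustrative sketch, not part of the original solution), a central finite difference confirms that $e^{\lambda x}$ satisfies Eq. (17) for an arbitrarily chosen complex $\lambda$.

```python
# Sketch check of Eqs. (16)-(18): f(x) = exp(lambda*x) obeys f' = lambda*f.
# The complex eigenvalue lam and the sample points are arbitrary choices.
import cmath

lam = 0.4 - 1.3j
f = lambda x: cmath.exp(lam * x)

h = 1e-6
for x in [-0.8, 0.0, 1.7]:
    df = (f(x + h) - f(x - h)) / (2 * h)     # central-difference derivative
    assert abs(df - lam * f(x)) < 1e-6
```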
The above discussion applies also to case (ii), because the exponential is a continuous function. However, we should be careful with case (iii), as generally the exponential diverges at infinity. Let's write the complex number $\lambda = a + ib$, where $a$ and $b$ are both real. We have
$$f(x) = e^{\lambda x} = e^{ax}e^{ibx}.$$
If $a > 0$, then $f(x)$ diverges as $x \to +\infty$, and if $a < 0$ then $f(x)$ diverges as $x \to -\infty$. Consider now $a = 0$. Then $f(x)$ remains finite as $x \to \pm\infty$, because $|f(x)|^2 = 1$. Thus, in case (iii), the eigenfunctions of $\hat D$ are the exponentials $e^{\lambda x}$ with $\mathrm{Re}\,\lambda = 0$, i.e. $\lambda$ has to be purely imaginary, so that $f(x)$ is non-divergent at infinity. To study case (iv), we need to select the eigenfunctions which are square-normalizable. For any complex $\lambda = a + ib$, we have
$$\int |f|^2(x)\,dx = \int e^{2ax}\,dx = \infty, \tag{19}$$
so no eigenfunction is square-normalizable, and in case (iv) $\hat D$ has no eigenfunctions.
Introducing $f(x) = e^{\lambda x}g(x)$, where $g(x)$ is any function and $\lambda$ is any complex number, the equation becomes
$$e^{\lambda(x - L)}g(x - L) = \mu\, e^{\lambda x}g(x); \tag{20}$$
setting $\mu = e^{-\lambda L}$, the equation simplifies to
$$g(x - L) = g(x), \tag{21}$$
i.e. $g(x)$ is any periodic function with period $L$. The eigenfunctions of $\hat T_L$ are therefore
$$f(x) = e^{\lambda x}g(x), \qquad g(x - L) = g(x), \tag{22}$$
with eigenvalue $\mu = e^{-\lambda L}$.
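This family of eigenfunctions can be checked numerically (an illustrative sketch; the period, $\lambda$, and the periodic function below are arbitrary choices).

```python
# Sketch check of Eqs. (20)-(21): f(x) = exp(lambda*x) g(x), with g periodic
# of period L, satisfies T_L f = exp(-lambda*L) f.  All choices are arbitrary.
import cmath
import math

L = 2.0
lam = 0.3 + 0.9j
g = lambda x: math.sin(2 * math.pi * x / L) + 2.0   # any function of period L
f = lambda x: cmath.exp(lam * x) * g(x)

mu = cmath.exp(-lam * L)                            # the eigenvalue
for x in [-1.1, 0.4, 3.2]:
    assert abs(f(x - L) - mu * f(x)) < 1e-9
```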
(a) (2 points) By applying the commutator $[\hat T_L, \hat x]$ to some arbitrary function $f(x)$ we obtain:
$$[\hat T_L, \hat x]f(x) = \hat T_L(\hat x f(x)) - \hat x\hat T_L f(x) \tag{23}$$
$$= \hat T_L(x f(x)) - x\hat T_L f(x) \tag{24}$$
$$= (x - L)f(x - L) - x f(x - L) \tag{25}$$
$$= -L f(x - L) = -L\hat T_L f(x). \tag{26}$$
It is important to note that in going from line (23) to line (24) we used the fact that in the position-space representation the position operator $\hat x$ becomes just multiplication by $x$.
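A numerical sketch of this commutator (not part of the original solution; the test function and the values of $L$, $x$ are arbitrary choices):

```python
# Numerical sketch of [T_L, x] f(x) = -L * T_L f(x).
# The test function f and the values of L, x are arbitrary choices.
import math

L = 0.7
f = lambda x: math.sin(x) + x ** 2                  # arbitrary test function
T_L = lambda func: (lambda x: func(x - L))          # translation operator

commutator = lambda x: T_L(lambda y: y * f(y))(x) - x * T_L(f)(x)  # [T_L, x] f
for x in [-1.3, 0.0, 2.5]:
    assert abs(commutator(x) - (-L) * T_L(f)(x)) < 1e-12
```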
(b) (2 points) Following the same approach as above we have, acting on some arbitrary function $f(x)$,
$$\hat T_L\hat D f(x) = \hat T_L f'(x) = f'(x - L), \tag{27a}$$
$$\hat D\hat T_L f(x) = \frac{\partial}{\partial x}f(x - L) = f'(x - L), \tag{27b}$$
so $\hat T_L$ and $\hat D$ commute: $[\hat T_L, \hat D] = 0$.
$$e^{-L\frac{\partial}{\partial x}} = \hat 1 - L\frac{\partial}{\partial x} + \frac{L^2}{2}\frac{\partial^2}{\partial x^2} + \dots \tag{28}$$
If it seems strange to you that squaring $\partial/\partial x$ gives the second derivative and not the first derivative squared, I'd suggest thinking about it like this: when thought of as an operator, $x^2$ means to multiply something by $x$ twice, so the square of $\partial/\partial x$ must mean applying the derivative operator twice, i.e. taking the second derivative. Acting on an arbitrary function $f(x)$ gives
$$e^{-L\frac{\partial}{\partial x}}f(x) = f(x) - L\frac{\partial f}{\partial x} + \frac{L^2}{2}\frac{\partial^2 f}{\partial x^2} + \dots \tag{29}$$
The RHS of this equation looks suspiciously like a Taylor series. A function $f(u)$ can be represented as a Taylor expansion about some other point $u_0$ a distance $\Delta u \equiv u - u_0$ away:
$$f(u) = f(u_0 + \Delta u) = f(u_0) + \Delta u\, f'(u_0) + \frac{(\Delta u)^2}{2!}f''(u_0) + \dots \tag{30}$$
Comparing our last two equations, we see that they're really the same thing, with $u_0 \to x$, $\Delta u \to -L$. This means that what we have on the RHS of Equation (29) is $f(x - L)$, so
$$e^{-L\frac{\partial}{\partial x}}f(x) = f(x - L) = \hat T_L f(x). \tag{31}$$
Since $f(x)$ was arbitrary, this is an operator identity,
$$\hat T_L = e^{-L\frac{\partial}{\partial x}}, \tag{32}$$
or, in terms of the derivative operator,
$$\hat T_L = e^{-L\hat D}. \tag{33}$$
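For a cubic polynomial the exponential series terminates after the third derivative, so the identity $\hat T_L = e^{-L\hat D}$ can be checked exactly (an illustrative sketch; the cubic and the values of $L$, $x$ are arbitrary choices).

```python
# Sketch check of T_L = e^{-L D}: for a cubic, the series truncates exactly,
# and summing it term by term reproduces f(x - L).  All choices are arbitrary.
f  = lambda x: 2 * x ** 3 - x + 5
f1 = lambda x: 6 * x ** 2 - 1        # f'
f2 = lambda x: 12 * x                # f''
f3 = lambda x: 12.0                  # f'''

def translate_via_series(x, L):
    """(1 - L D + L^2/2 D^2 - L^3/6 D^3) f, evaluated at x."""
    return f(x) - L * f1(x) + L ** 2 / 2 * f2(x) - L ** 3 / 6 * f3(x)

L, x = 0.9, 1.4
assert abs(translate_via_series(x, L) - f(x - L)) < 1e-12
```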
On one hand,
$$[\hat T_L, \hat x] = -L\hat T_L = -L(\hat 1 - L\hat D + \dots), \tag{34}$$
and on the other hand,
$$[\hat T_L, \hat x] = [\hat 1 - L\hat D + \dots,\ \hat x] = [\hat 1, \hat x] - L[\hat D, \hat x] + \dots \tag{35}$$
Now, comparing the terms in (34) and (35) with the same powers of $L$, we obtain in particular, for the term linear in $L$,
$$\hat 1 = [\hat D, \hat x]. \tag{36}$$
(36)
[TL , x] = LTL = L
n=0
and
[TL , x]
n=0
)n =
n1
(LD
(L)n D
n!
(n
1)!
n=1
(37)
)n , x
=
n , x]
(LD
(L)n [D
n!
n!
n=0
(38)
we obtain, comparing again the terms of the same degree in L between (37) and (38),
1
n , x]
n1 = 1 [D
D
(n 1)!
n!
(39)
n , x] = nD
n1 .
[D
(40)
and thus
n and x can be seen as a
Note that this result tells us that the commutator between D
acting on D
n.
derivative with respect to D
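Equation (40) can be verified on polynomials, where derivatives are exact (an illustrative sketch, not part of the original solution; the test polynomial is an arbitrary choice).

```python
# Sketch check of [D^n, x] = n D^(n-1) on polynomials stored as coefficient
# lists (so all derivatives are exact).  The polynomial is arbitrary.

def deriv(c):
    """Derivative of a polynomial with coefficients c[k] of x^k."""
    return [k * c[k] for k in range(1, len(c))] or [0]

def nderiv(c, n):
    for _ in range(n):
        c = deriv(c)
    return c

def mul_x(c):
    """Coefficients of x times the polynomial."""
    return [0] + list(c)

def evaluate(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

f = [3, -1, 0, 2, 5]                 # f(x) = 3 - x + 2x^3 + 5x^4
x0 = 1.7
for n in range(1, 4):
    lhs = evaluate(nderiv(mul_x(f), n), x0) - x0 * evaluate(nderiv(f, n), x0)
    rhs = n * evaluate(nderiv(f, n - 1), x0)     # n D^(n-1) f at x0
    assert abs(lhs - rhs) < 1e-9
```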
$$f(x - L) = \hat T_L f(x) = \hat T_L\frac{1}{\sqrt{2\pi}}\int dk\, e^{ikx}\tilde f(k) = \frac{1}{\sqrt{2\pi}}\int dk\, \hat T_L e^{ikx}\tilde f(k)$$
$$= \frac{1}{\sqrt{2\pi}}\int dk\, e^{ik(x - L)}\tilde f(k) = \frac{1}{\sqrt{2\pi}}\int dk\, e^{ikx}e^{-ikL}\tilde f(k). \tag{41}$$
Thus, the Fourier transform of $f(x - L)$ is $e^{-ikL}\tilde f(k)$, as we also worked out in Problem Set 2. The action of $\hat T_L$ on $\tilde f(k)$ is then
$$\hat T_L\tilde f(k) = e^{-ikL}\tilde f(k). \tag{42}$$
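The same shift theorem holds in discrete form (an illustrative sketch, not part of the original solution): the DFT of a circularly shifted sequence equals the original DFT times the phase $e^{-2\pi i k m/N}$, with the integer shift $m$ playing the role of $L$.

```python
# Sketch of the discrete shift theorem: DFT of a circularly shifted sequence
# picks up the phase e^{-2*pi*i*k*m/N}.  The sequence and shift are arbitrary.
import cmath

def dft(seq):
    N = len(seq)
    return [sum(seq[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.3, 1.0, -0.7, 2.2, 0.0, -1.1]
N = len(x)
m = 2                                           # shift amount
shifted = [x[(n - m) % N] for n in range(N)]

X, Xs = dft(x), dft(shifted)
for k in range(N):
    phase = cmath.exp(-2j * cmath.pi * k * m / N)
    assert abs(Xs[k] - phase * X[k]) < 1e-9
```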
(f) (3 points) Let's use the Taylor expansion of $\hat T_L$ that we found in part (c):
$$\hat T_L\tilde f(k) = e^{-L\hat D}\tilde f(k) = (\hat 1 - L\hat D + \dots)\tilde f(k). \tag{43}$$
Comparing with (42),
$$\hat T_L\tilde f(k) = e^{-ikL}\tilde f(k) = (1 - ikL + \dots)\tilde f(k), \tag{44}$$
we conclude that, acting on $\tilde f(k)$,
$$\hat D\tilde f(k) = ik\tilde f(k). \tag{45}$$
This result makes sense; indeed, we can check it through a direct computation of the action of $\hat D$ on $\tilde f(k)$:
$$\hat D f(x) = \frac{\partial}{\partial x}\frac{1}{\sqrt{2\pi}}\int dk\, e^{ikx}\tilde f(k) = \frac{1}{\sqrt{2\pi}}\int dk\,\frac{\partial}{\partial x}e^{ikx}\tilde f(k) = \frac{1}{\sqrt{2\pi}}\int dk\, e^{ikx}\, ik\tilde f(k); \tag{46}$$
this shows that the Fourier transform of $\hat D f(x)$ is $ik\tilde f(k)$, which is precisely what we determined in (45).
(g) (3 points) We have
$$\hat x f(x) = x f(x) = \frac{1}{\sqrt{2\pi}}\int dk\, x\, e^{ikx}\tilde f(k) = \frac{1}{\sqrt{2\pi}}\int dk\,\left(-i\frac{\partial}{\partial k}e^{ikx}\right)\tilde f(k)$$
$$= \frac{1}{\sqrt{2\pi}}\int dk\, e^{ikx}\, i\frac{\partial}{\partial k}\tilde f(k), \tag{47}$$
where in the last step we integrated by parts (the boundary terms vanish for normalizable $\tilde f$). This tells us that
$$\hat x\tilde f(k) = i\frac{\partial}{\partial k}\tilde f(k). \tag{48}$$
(h) (4 points) We have, for $f(x)$,
$$[\hat D, \hat x]f(x) = \frac{\partial}{\partial x}(x f(x)) - x\frac{\partial}{\partial x}f(x) = f(x), \tag{49}$$
and for $\tilde f(k)$,
$$[\hat D, \hat x]\tilde f(k) = ik\left(i\frac{\partial}{\partial k}\tilde f(k)\right) - i\frac{\partial}{\partial k}\left(ik\tilde f(k)\right) = \tilde f(k). \tag{50}$$
This shows that the commutation relation between $\hat D$ and $\hat x$ holds for both $f(x)$ and $\tilde f(k)$. The lesson we take is that an operator statement does not depend on the choice of representation (either position or momentum) of the functions we consider.
(a) (6 points) To show that $\hat C \equiv [\hat A, \hat B]$ is a linear operator, we need to apply the definition of a linear operator to the commutator. Consider the usual linear combination of two functions, $af(x) + bg(x)$. Then we have
$$\hat A\hat B(af(x) + bg(x)) = \hat A(a\hat B f(x) + b\hat B g(x)) = a\hat A\hat B f(x) + b\hat A\hat B g(x), \tag{51}$$
$$\hat B\hat A(af(x) + bg(x)) = a\hat B\hat A f(x) + b\hat B\hat A g(x), \tag{52}$$
thus
$$[\hat A, \hat B](af(x) + bg(x)) = (\hat A\hat B - \hat B\hat A)(af(x) + bg(x)) = \hat A\hat B(af(x) + bg(x)) - \hat B\hat A(af(x) + bg(x)) \tag{53}$$
$$= a\hat A\hat B f(x) + b\hat A\hat B g(x) - a\hat B\hat A f(x) - b\hat B\hat A g(x) = a[\hat A, \hat B]f(x) + b[\hat A, \hat B]g(x). \tag{54}$$
Suppose $\psi_{ab}$ is a simultaneous eigenfunction, with $\hat A\psi_{ab} = a\psi_{ab}$ and $\hat B\psi_{ab} = b\psi_{ab}$. From
$$\hat A\hat B\psi_{ab} = ab\,\psi_{ab}, \qquad \hat B\hat A\psi_{ab} = ba\,\psi_{ab}, \tag{55}$$
we infer that the action of $\hat A$ and $\hat B$ on $\psi_{ab}$ can be interchanged, precisely because the eigenvalues $a$ and $b$ are complex numbers, and so they commute. We conclude that
$$[\hat A, \hat B]\psi_{ab} = (ab - ba)\psi_{ab} = 0. \tag{56}$$
$$[\hat p, \hat x]\psi(x) = \frac{\hbar}{i}\frac{\partial}{\partial x}\left(x\psi(x)\right) - x\,\frac{\hbar}{i}\frac{\partial}{\partial x}\psi(x) = \frac{\hbar}{i}\psi(x). \tag{57}$$
(b) (4 points) Since the commutator of $\hat x$ and $\hat p$ is nonvanishing on any function, they can't have any common eigenfunction. Indeed, suppose that $\psi(x)$ were an eigenfunction of both $\hat x$ and $\hat p$, with eigenvalues $a_x$, $a_p$ respectively. Then
$$i\hbar\,\psi(x) = [\hat x, \hat p]\psi(x) = (a_x a_p - a_p a_x)\psi(x) = 0, \tag{58}$$
which is a contradiction for any nonvanishing $\psi(x)$.
A momentum eigenfunction satisfies
$$\frac{\hbar}{i}\frac{\partial\psi}{\partial x} = \hbar k\,\psi(x), \tag{59}$$
i.e. $\psi(x) \propto e^{ikx}$, and therefore
$$\hat T_L\psi(x) = \psi(x - L) = e^{-ikL}\psi(x). \tag{60}$$
Physically, this tells us that states with definite momentum are translationally invariant up to an overall phase. We will return to this example later in the semester, where it will play an important role in explaining the physics of solids.
(d) (3 points) In order for $\hat A$ and $\hat B$ to share an eigenfunction $\psi$, we need that $[\hat A, \hat B]\psi = 0$; $\psi$ is thus an eigenfunction of the commutator with eigenvalue zero. Thus, in order to share an eigenfunction, the commutator must have at least one zero eigenvalue. For example, this does not happen for $\hat x$ and $\hat p$, whose commutator is proportional to the identity. As we argued above, all eigenvalues of the identity are non-zero, and thus $\hat x$ and $\hat p$ can share no common eigenfunctions.
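A tiny matrix sketch of the same logic (illustrative choices, not from the original problem): a commuting pair of operators can share eigenvectors, while a pair whose commutator has no zero eigenvalue cannot.

```python
# Sketch: two diagonal 2x2 matrices commute (and share the eigenvectors (1,0)
# and (0,1)); sigma_x and sigma_z have a nonvanishing commutator.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

A = [[2, 0], [0, 3]]                 # arbitrary commuting pair
B = [[5, 0], [0, 7]]
assert commutator(A, B) == [[0, 0], [0, 0]]      # vanishes on every vector

sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]
# This commutator has eigenvalues +/- 2i, never zero: no shared eigenvector.
assert commutator(sigma_x, sigma_z) == [[0, -2], [2, 0]]
```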
(e) (3 points) According to classical mechanics, given sufficiently precise measurements, all observables can in principle be determined with total certainty. However, as we saw with the boxes, or the 2-slit experiment, this is empirically false: in some situations, knowledge of one observable can imply irreducible uncertainty about other observables. In quantum mechanics, this remarkable fact is encoded by representing observables with operators and the state as a wavefunction. A quantum observable can thus only be said to have a well-defined value when the state (the wavefunction) is an eigenfunction of the corresponding operator.¹
¹ Conversely, if you pick a random state for the system, your operator will in general not have any well-defined value, though the probability of measuring any particular value can be determined.
When can two observables have definite values simultaneously? To have such certainty, the wavefunction must be a simultaneous eigenfunction of both of the corresponding operators. But such a shared eigenfunction can only exist if the commutator of the two operators can vanish. Thus, if the commutator of two observables does not vanish, there is an irreducible uncertainty in the values of those observables.
(f) (3 points) $\hat H$ and $\hat C$ do not commute. The reason for this is that an electron with a definite color can't have a definite hardness. Indeed, recall from Lecture 1 that if we throw a beam of white electrons through a hardness box and then measure color again, the outcome will be 50% white and 50% black. If $\hat H$ and $\hat C$ were commuting observables, we would be able to select simultaneously a definite value of hardness and a definite value of color.
To be more explicit, consider the experiment from Lecture 1 where we throw a beam
of electrons through the Color box, the Hardness box, and then the Color box again.
More precisely, what we do is to:
measure color, select white electrons
measure hardness, select soft electrons
measure color again.
We saw that the final outcome is 50% white electrons and 50% black electrons, instead of being 100% white electrons. This means that measuring hardness interferes with measuring color; in other words, it is impossible to know hardness and color at the same time. Thus, the uncertainty between hardness and color is nonzero, and we conclude that the hardness and color observables have nonvanishing commutator:
$$[\hat H, \hat C] \neq 0. \tag{61}$$
(g) (6 points) We need to find a representation of the four functions which gives the eigenvalues required by the problem, and such that the commutator between $\hat H$ and $\hat C$ does not vanish. $\psi_W$ has to be a superposition of $\psi_H$ and $\psi_S$, i.e.,
$$\psi_W = \alpha\psi_H + \beta\psi_S. \tag{62}$$
Since a white electron has equal probability of being hard or soft, we impose $|\alpha|^2 = |\beta|^2 = 1/2$. The most general possible superposition can then be written as
$$\psi_W = \frac{1}{\sqrt 2}e^{i\theta}\psi_H + \frac{1}{\sqrt 2}e^{i\phi}\psi_S. \tag{63}$$
As argued in class, the overall phase is immaterial, and we can set $\theta = 0$, which leaves us with
$$\psi_W = \frac{1}{\sqrt 2}\psi_H + \frac{1}{\sqrt 2}e^{i\phi}\psi_S. \tag{64}$$
A simple guess for $\psi_W$ could then be
$$\psi_W = \frac{1}{\sqrt 2}(\psi_H + \psi_S), \tag{65}$$
and correspondingly
$$\psi_B = \frac{1}{\sqrt 2}(\psi_H - \psi_S), \tag{66}$$
and verify that these states satisfy the requirements we need. The operators $\hat H$, $\hat C$ are already defined in terms of the eigenvalues associated to the four functions. Given this representation, the hardness of $\psi_W$ is given by the action of $\hat H$ on it:
$$\hat H\psi_W = \hat H\frac{1}{\sqrt 2}(\psi_H + \psi_S) = \frac{1}{\sqrt 2}(\psi_H - \psi_S) = \psi_B; \tag{67}$$
this equation tells us that $\psi_W$ does not have a definite hardness, because it is not an eigenfunction of $\hat H$. Analogously,
$$\hat H\psi_B = \hat H\frac{1}{\sqrt 2}(\psi_H - \psi_S) = \frac{1}{\sqrt 2}(\psi_H + \psi_S) = \psi_W. \tag{68}$$
Inverting (65) and (66) gives
$$\psi_H = \frac{1}{\sqrt 2}(\psi_W + \psi_B), \tag{69}$$
$$\psi_S = \frac{1}{\sqrt 2}(\psi_W - \psi_B), \tag{70}$$
and therefore
$$\hat C\psi_S = \hat C\frac{1}{\sqrt 2}(\psi_W - \psi_B) = \frac{1}{\sqrt 2}(\psi_W + \psi_B) = \psi_H; \tag{71}$$
thus, hard and soft electrons do not have definite color, just as white and black electrons do not have definite hardness. Last but not least, let us compute the commutator of $\hat H$ and $\hat C$ on these states:
$$[\hat H, \hat C]\psi_S = \hat H\hat C\psi_S - \hat C\hat H\psi_S = \hat H\psi_H + \hat C\psi_S = \psi_H + \psi_H = 2\psi_H, \tag{72}$$
$$[\hat H, \hat C]\psi_W = \hat H\hat C\psi_W - \hat C\hat H\psi_W = \hat H\psi_W - \hat C\psi_B = \psi_B + \psi_B = 2\psi_B, \tag{73}$$
$$[\hat H, \hat C]\psi_B = \hat H\hat C\psi_B - \hat C\hat H\psi_B = -\hat H\psi_B - \hat C\psi_W = -\psi_W - \psi_W = -2\psi_W, \tag{74}$$
$$[\hat H, \hat C]\psi_H = \hat H\hat C\psi_H - \hat C\hat H\psi_H = \hat H\psi_S - \hat C\psi_H = -\psi_S - \psi_S = -2\psi_S. \tag{75}$$
As anticipated in part (f), the commutator is nonvanishing. The fact that it is nonvanishing on every eigenfunction means that there exists no function with a definite value of both hardness and color.
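These relations can be checked in an explicit 2-component representation (an assumed, illustrative choice, not the only one possible), with $\psi_H = (1,0)$, $\psi_S = (0,1)$ and $\hat H$, $\hat C$ acting as the matrices below.

```python
# Sketch: an assumed 2x2 representation with psi_H = (1,0), psi_S = (0,1),
# H = diag(1,-1) and C swapping components, so that C psi_W = +psi_W and
# C psi_B = -psi_B.  All numerical choices are illustrative.
import math

H = [[1, 0], [0, -1]]      # hardness: psi_H -> +psi_H, psi_S -> -psi_S
C = [[0, 1], [1, 0]]       # color:    psi_W -> +psi_W, psi_B -> -psi_B

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def comm(v):               # [H, C] v
    hc = apply(H, apply(C, v))
    ch = apply(C, apply(H, v))
    return [hc[0] - ch[0], hc[1] - ch[1]]

s = 1 / math.sqrt(2)
psi_H, psi_S = [1, 0], [0, 1]
psi_W, psi_B = [s, s], [s, -s]

assert comm(psi_S) == [2 * p for p in psi_H]    # [H,C] psi_S =  2 psi_H
assert comm(psi_W) == [2 * p for p in psi_B]    # [H,C] psi_W =  2 psi_B
assert comm(psi_B) == [-2 * p for p in psi_W]   # [H,C] psi_B = -2 psi_W
assert comm(psi_H) == [-2 * p for p in psi_S]   # [H,C] psi_H = -2 psi_S
```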
(h) (4 points) It is difficult to construct a theory based on nonlinear operators that is consistent with the principle of superposition and the probability interpretation. Recall that the principle of superposition says that if $\psi_1(x)$ and $\psi_2(x)$ are suitable wavefunctions for describing a particle in a given system, then $\psi(x) \equiv \psi_1(x) + \psi_2(x)$ is also an acceptable solution. Now, let's consider the map-to-$g(x)$ operator $\hat P_g$ from Problem 1 and consider what happens when this operator acts on $\psi_1$, $\psi_2$, and $\psi$. The first two wavefunctions give us no trouble:
$$\hat P_g\psi_1 = g(x), \tag{76a}$$
$$\hat P_g\psi_2 = g(x). \tag{76b}$$
Acting on the superposition, however, we find
$$\hat P_g\psi = \hat P_g(\psi_1 + \psi_2) = g(x), \tag{77}$$
whereas consistency with the superposition principle would require
$$\hat P_g\psi_1 + \hat P_g\psi_2 = 2g(x). \tag{78}$$
The two results disagree, so a nonlinear operator like $\hat P_g$ cannot act consistently on superpositions of states.
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.