Hi all, newbie at quantum stuff!

I know about quantum jumps from my work in lasers and such in the photonics industry, where I have been for the past 10 years, but only as a tech setting up semiconductor machines: ion implanters, sputtering machines, electron and optical microscopes, etching, spin dryers, horizontal and vertical furnaces. In short, hardware is my thing. So I look forward to some work to understand QM a little better.

Hello

“Soothfast” was not available, oddly, so I go by “Soothseeker.”

I’m not right away seeing how one enters LaTeX here, so I’ll have to read up on that.

Still, I’m going to try something….

\int_0^{\infty} e^{-x} dx

(D_{n-1}^{m_{\sigma}}\circ\partial_n)(\sigma) - \bar{D}_{n-1}(\partial_n\sigma) = \sum_{i=0}^n (-1)^i D_{n-1}^{m_{\sigma}}(\sigma_i) - \sum_{i=0}^n (-1)^i D_{n-1}^{m_{\sigma i}}(\sigma_i)

Time-independent Schrodinger equation 02

— 2. The infinite square well —

Imagine a situation where a particle is confined to one-dimensional motion and bounces off two infinitely rigid walls while conserving kinetic energy.

This situation can be modeled by the following potential:

\displaystyle V(x) = \begin{cases} 0 \quad 0\leq x \leq a\\ \infty \quad \mathrm{otherwise} \end{cases}

Classically speaking the description is basically what we gave in the initial paragraph; in what follows we'll derive the quantum mechanical description of the physics that results from this potential.

Outside the potential well the wave function vanishes, {\psi=0}, since the particle cannot exist outside the well.

Inside the well the potential is {0}, hence the particle is a free particle and its wave equation is

\displaystyle -\frac{\hbar ^2}{2m}\frac{d^2 \psi}{dx^2}=E \psi

Since {E>0} we can define a new quantity {k}

\displaystyle k=\frac{\sqrt{2mE}}{\hbar}

and rewrite the wave equation in the following way

\displaystyle \frac{d^2 \psi}{dx^2}=-k^2\psi

The previous equation has the same form as the classical harmonic oscillator equation, whose solution is known to be of the form

\displaystyle \psi=A\sin kx+B\cos kx

Where {A} and {B} are arbitrary constants whose values will be fixed by normalization and the boundary conditions.

The boundary conditions that have to be respected are

  1. {\psi(0)=A\sin k0+B\cos k0=0 }
  2. {\psi(a)=A\sin ka+B\cos ka=0 }

since the wave function has to be continuous and it vanishes outside the potential well.

The first condition implies {B=0} and hence the wave function simply is

\displaystyle \psi=A\sin kx

For the second condition it is {A\sin ka=0}. This implies that either {A=0} or {\sin ka=0}. The first possibility can be discarded since with {A=B=0} one would have {\psi (x)=0} and that solution has no physical interest whatsoever. Hence one is left with {\sin ka=0}.

The previous condition implies that

\displaystyle ka=0,\pm\pi,\pm 2\pi,\pm 3\pi,\cdots

Since {k=0} again leads us to {\psi (x)=0} we’ll discard this value.

The parity of the sine function ({\sin (-x)=-\sin x}) allows one to absorb the sign of the negative solutions into the constant {A} (which remains undetermined) and we are left with

\displaystyle k_n=\frac{n\pi}{a}

where {n} runs from {1} to infinity.

Since the second boundary condition determines the allowed values of {k} it also determines the allowed values of the energy of the system.

\displaystyle E_n=\frac{\hbar ^2 k_n^2}{2m}=\frac{n^2\pi^2\hbar^2}{2ma^2}

Even though in a classical context a particle trapped in an infinite square well can have any value for its energy, that doesn't happen in the quantum mechanical context. The allowed values for the energy arise from the fact that we imposed boundary conditions on a differential equation. Even in classical contexts one faces quantization conditions whenever one imposes boundary conditions on differential equations.
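
To get a feel for the scale of these energies, here is a small numerical sketch (not part of the original derivation) that evaluates {E_n} for the first few {n}. The choice of an electron and a well width of {1\ \mathrm{nm}} is an assumption made purely for illustration.

```python
# Energy levels E_n = n^2 pi^2 hbar^2 / (2 m a^2).
# The particle (an electron) and the well width a = 1 nm are illustrative assumptions.
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
a = 1e-9                 # well width, m (assumed)
eV = 1.602176634e-19     # J per electronvolt

for n in range(1, 5):
    E_n = n**2 * np.pi**2 * hbar**2 / (2 * m_e * a**2)
    print(f"E_{n} = {E_n / eV:.3f} eV")
```

For {n=1} this comes out to roughly {0.38} eV, and the higher levels grow as {n^2}.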

The wave functions are

\displaystyle \psi_n(x)=A\sin\left(\frac{n\pi}{a}x\right)

At this point one just has to find the value of {A} in order to determine the time-independent wave function.

In order to determine {A} one must normalize the wave function:

{\begin{aligned} 1&=\int_0^a|A|^2\sin^2(kx)dx\\ &=|A|^2\dfrac{a}{2} \end{aligned}}

Hence

\displaystyle A=\sqrt{\frac{2}{a}}

Where we have chosen the positive real root because the phase of {A} has no physical significance.

Finally one can write the wave functions that represent the particle inside of the infinite square well:

\displaystyle \psi_n(x)=\sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)

As we said previously the time-independent Schroedinger equation has an infinite set of solutions. As we can see from the expression for {E_n}, the energy of the particle increases as {n} increases. For that reason the state {n=1} is said to be the ground state while the other values of {n} correspond to excited states.

— 2.0.1. Properties of infinite square well solutions —

The solutions we just found to the infinite square well have some interesting properties. As an example we’ll sketch the first four solutions to the time-independent Schroedinger equation (here we set {a=1}).
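
For readers who want to reproduce the sketch, here is a minimal matplotlib script (assuming {a=1}, as stated above) that plots the first four eigenfunctions {\psi_n(x)=\sqrt{2/a}\sin(n\pi x/a)}:

```python
# Plot the first four infinite-well eigenfunctions psi_n(x) = sqrt(2/a) sin(n pi x / a), a = 1.
import numpy as np
import matplotlib.pyplot as plt

a = 1.0
x = np.linspace(0, a, 500)

fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
for n, ax in zip(range(1, 5), axes):
    ax.plot(x, np.sqrt(2 / a) * np.sin(n * np.pi * x / a))
    ax.axhline(0, color="gray", lw=0.5)
    ax.set_title(f"n = {n}")
    ax.set_xlabel("x")
axes[0].set_ylabel("psi_n(x)")
plt.tight_layout()
plt.show()
```

The plots make the node-counting property listed below easy to see.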

  1. The wave functions are alternately even and odd relative to the center of the square well.
  2. The wave function of each successive energy state has one more node than the one that precedes it: {\psi_1} has {0} nodes, {\psi_2} has one node, {\psi_3} has two nodes, and so on and so forth.
  3. The wave functions are orthonormal

    \displaystyle \int\psi_m^*(x)\psi_n(x)dx =\delta_{mn}

  4. The set of wave functions is complete.

    \displaystyle f(x)=\sum_{n=1}^{\infty}c_n\psi_n(x)=\sqrt{\frac{2}{a}}\sum_{n=1}^{\infty}c_n\sin\left(\frac{n\pi}{a}x\right)

    To evaluate the coefficients {c_n} one uses the expression {c_m=\int \psi_m^*(x)f(x)dx}. The proof is

    {\begin{aligned} \int \psi_m^*(x)f(x)dx&=\sum_{n=1}^{\infty}c_n\int \psi_m^*(x)\psi_n(x)dx\\ &=\sum_{n=1}^{\infty}c_n\delta_{mn}\\ &=c_m \end{aligned}}

The first property holds for all symmetric potentials. The second property is always valid. Orthonormality is also universally valid. Completeness is the most subtle of these properties, but for all practical purposes we can assume that, for every potential we'll encounter, the set of solutions is complete.
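
As a quick sanity check of the orthonormality and completeness properties, the following sketch integrates {\psi_m\psi_n} numerically and expands an arbitrarily chosen test function {f(x)=x(a-x)} (an assumption made just for illustration) in the first 20 eigenfunctions:

```python
# Numerical check of orthonormality and of the coefficient formula c_m = int psi_m(x) f(x) dx.
# The test function f(x) = x(a - x) is an arbitrary choice for illustration.
import numpy as np
from scipy.integrate import quad

a = 1.0

def psi(n, x):
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

# Orthonormality: ~1 when m == n, ~0 otherwise.
for m in range(1, 4):
    row = [quad(lambda x: psi(m, x) * psi(n, x), 0, a)[0] for n in range(1, 4)]
    print([round(v, 6) for v in row])

# Completeness: rebuild f at a sample point from the first 20 coefficients.
f = lambda x: x * (a - x)
c = [quad(lambda x: psi(n, x) * f(x), 0, a)[0] for n in range(1, 21)]
x0 = 0.3
series = sum(cn * psi(n, x0) for n, cn in enumerate(c, start=1))
print(f"f({x0}) = {f(x0):.6f}, 20-term series = {series:.6f}")
```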

— 2.1. Time-dependent solutions —

The stationary states for the infinite square well are

\displaystyle \Psi_n(x,t)=\sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)e^{-in^2\pi^2\hbar/(2ma^2)t} \ \ \ \ \ (1)

 

For our job to be done we need to show that general solutions to the time-dependent Schroedinger equation can be written as linear combinations of stationary states.

In order to do that one must first write the general solution for {t=0}

\displaystyle \Psi(x,0)=\sum_{n=1}^\infty c_n\psi _n(x)

Since {\psi _n} form a complete set we know that {\Psi (x,0)} can be written in that way. Using the orthonormality condition we know that the coefficients are:

\displaystyle c_n=\sqrt{\frac{2}{a}}\int_0^a \sin\left(\frac{n\pi}{a}x\right)\Psi(x,0)dx

With this we can write

\displaystyle \Psi(x,t)=\sum_{n=1}^\infty c_n\sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)e^{-in^2\pi^2\hbar/(2ma^2)t}

which is the most general solution to the infinite square well potential.
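
A rough numerical sketch of this recipe, under the assumptions {\hbar=m=a=1} and an arbitrarily chosen (already normalized) triangular initial profile: compute the {c_n} by projection, then reassemble {\Psi(x,t)}.

```python
# Expand an assumed initial state Psi(x,0) in the infinite-well eigenfunctions and evolve it,
# using units where hbar = m = a = 1. The triangular initial profile is an illustrative choice.
import numpy as np
from scipy.integrate import quad

a = 1.0
N = 30                                   # number of terms kept in the truncated sum

def psi(n, x):
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

def E(n):
    return n**2 * np.pi**2 / (2 * a**2)  # energy levels with hbar = m = 1

Psi0 = lambda x: np.sqrt(3 / a) * (1 - abs(2 * x / a - 1))   # normalized tent profile (assumed)

c = np.array([quad(lambda x: psi(n, x) * Psi0(x), 0, a, points=[a / 2])[0]
              for n in range(1, N + 1)])

def Psi(x, t):
    return sum(c[n - 1] * psi(n, x) * np.exp(-1j * E(n) * t) for n in range(1, N + 1))

print("sum |c_n|^2 =", np.sum(np.abs(c) ** 2))      # ~1, since Psi(x,0) is normalized
print("|Psi(a/2, t=0.1)|^2 =", abs(Psi(a / 2, 0.1)) ** 2)
```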

 

Time-independent Schrodinger equation 01

— 1. Stationary states —

In the previous posts we've normalized wave functions and calculated expectation values of momentum and position, but at no point have we asked a rather obvious question:

How does one calculate the wave function in the first place?

The answer to that question obviously is:

You have to solve the Schroedinger equation.

The Schroedinger equation is

\displaystyle i\hbar\frac{\partial \Psi(x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\Psi(x,t)}{\partial x^2}+V\Psi(x,t)

which is a second-order partial differential equation. Partial differential equations are in general much harder to solve than ordinary differential equations.

The trick is to turn this partial differential equation into ordinary differential equations.

To do such a thing we’ll employ the separation of variables technique.

We'll assume that {\Psi(x,t)} can be written as the product of two functions. One of the functions is a function of position alone whereas the other is solely a function of {t}.

\displaystyle  \Psi(x,t)=\psi(x)\varphi(t)

This might seem an overly restrictive assumption on the class of solutions of the Schroedinger equation, but in this case appearances are deceiving. As we'll see later on, more general solutions of the Schroedinger equation can be constructed from separable solutions.

Calculating the appropriate derivatives for {\Psi(x,t)} yields:

\displaystyle  \frac{\partial \Psi}{\partial t}=\psi\frac{d\varphi}{dt}

and

\displaystyle  \frac{\partial^2 \Psi}{\partial x^2} = \frac{d^2 \psi}{d x^2}\varphi

Substituting the previous equations into the Schroedinger equation results in:

\displaystyle  i\hbar\psi\frac{d\varphi}{dt}=-\frac{\hbar^2}{2m}\frac{d^2 \psi}{d x^2}\varphi+V\psi\varphi

Dividing the previous equality by {\psi\varphi}

\displaystyle  i\hbar\frac{1}{\varphi}\frac{d\varphi}{dt}=-\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2 \psi}{d x^2}+V

Now in the previous equality the left-hand side is a function of {t} while the right-hand side is a function of {x} (remember that by hypothesis {V} isn’t a function of {t}).

These two facts make the equality expressed in the last equation require a very fine balance. For instance, if one were to vary {x} without varying {t} then the right-hand side would change while the left-hand side would remain the same, spoiling our equality. Evidently such a thing can't happen! The only way for the equality to hold is for both sides of the equation to be in fact constant. That way there's no funny business of changing one side while the other remains constant.

For reasons that will become obvious in the course of this post we’ll denote this constant (the so-called separation constant) by {E}.

\displaystyle  i\hbar\frac{1}{\varphi}\frac{d\varphi}{dt}=E \Leftrightarrow \frac{d\varphi}{dt}=-\frac{i E}{\hbar}\varphi

and for the second equation

\displaystyle  -\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2 \psi}{d x^2}+V=E \Leftrightarrow -\frac{\hbar^2}{2m}\frac{d^2 \psi}{d x^2}+V\psi=E\psi

The first equation of this group is ready to be solved and a solution is

\displaystyle  \varphi=e^{-i\frac{E}{\hbar}t}

The second equation, the so-called time-independent Schroedinger equation, can only be solved once a potential is specified.

As we can see the method of separable solutions has lived up to its promise. With it we were able to produce two ordinary differential equations which can in principle be solved. In fact one of them is already solved.

At this point we'll state a few characteristics of separable solutions in order to better understand their importance (one of these characteristics was already hinted at before):

  1. Stationary states

    The wave function is

    \displaystyle  \Psi(x,t)=\psi(x)e^{-i\frac{E}{\hbar}t}

    and it is obvious that it depends on {t}. On the other hand the probability density doesn’t depend on {t}. This result can easily be proven with the implicit assumption that {E} is real (in a later exercise we’ll see why {E} has to be real).

    \displaystyle \Psi(x,t)^*\Psi(x,t)=\psi^*(x)e^{i\frac{E}{\hbar}t}\psi(x)e^{-i\frac{E}{\hbar}t}=|\psi(x)|^2

    If we were interested in calculating the expectation value of any dynamical variable we would see that those values are constant in time.

    \displaystyle  <Q(x,p)>=\int\Psi^*Q\left( x,\frac{\hbar}{i}\frac{\partial}{\partial x} \right)\Psi\, dx

    In particular {<x>} is constant in time and as a consequence {<p>=0}.

  2. Definite total energy

    As we saw in classical mechanics the Hamiltonian of a particle is

    \displaystyle H(x,p)=\frac{p^2}{2m}+V(x)

    Doing the appropriate substitutions the corresponding quantum mechanical operator is (in quantum mechanics operators are denoted by a hat):

    \displaystyle \hat{H}=-\frac{\hbar^2}{2m}\frac{d^2}{d x^2}+V

    Hence the time-independent Schroedinger equation can be written in the following form:

    \displaystyle  \hat{H}\psi=E\psi

    The expectation value of the Hamiltonian is

    \displaystyle  <\hat{H}>=\int\psi ^*\hat{H}\psi\, dx=E\int|\psi|^2\, dx=E

    It also is

    \displaystyle  \hat{H}^2\psi=\hat{H}(\hat{H}\psi)=\hat{H}(E\psi)=E\hat{H}\psi=EE\psi=E^2\psi

    Hence

    \displaystyle  <\hat{H}^2>=\int\psi ^*\hat{H}^2\psi\, dx=E^2\int|\psi|^2\, dx=E^2

    So that the variance is

    \displaystyle \sigma_{\hat{H}}^2=<\hat{H}^2>-<\hat{H}>^2=E^2-E^2=0

    In conclusion, in a stationary state every energy measurement is certain to return the value {E}, since the spread of the energy distribution is zero.

  3. Linear combination

    The general solution of the Schroedinger equation is a linear combination of separable solutions.

    We’ll see in future examples and exercises that the time-independent Schroedinger equation holds an infinite number of solutions. Each of these different wave functions is associated with a different separation constant. Which is to say that for each allowed energy level there is a different wave function.

    It so happens that for the time-dependent Schroedinger equation any linear combination of solutions is itself a solution. After finding the separable solutions the task is to construct a more general solution of the form

    \displaystyle \Psi(x,t)=\sum_{n=1}^{+\infty}c_n\psi_n(x)e^{-i\frac{E_n}{\hbar}t}=\sum_{n=1}^{+\infty}c_n\Psi_n(x,t)

    The point is that every solution of the time-dependent Schroedinger equation can be written like this, with the initial conditions of the problem being studied fixing the values of the constants {c_n}.

    I understand that all of this may be a little too abstract so we’ll solve a few exercises to make it more palatable.

As an example we’ll calculate the time evolution of a particle that starts out in a linear combination of two stationary states:

\displaystyle \Psi(x,0)=c_1\psi_1(x)+c_2\psi_2(x)

For the sake of our discussion let’s take {c_n} and {\psi_n} to be real.

Hence the time evolution of the particle is simply:

\displaystyle  \Psi(x,t)=c_1\psi_1(x)e^{-i\frac{E_1}{\hbar}t}+c_2\psi_2(x)e^{-i\frac{E_2}{\hbar}t}

For the probability density it is

{\begin{aligned} |\Psi(x,t)|^2 &= \left( c_1\psi_1(x)e^{i\frac{E_1}{\hbar}t}+c_2\psi_2(x)e^{i\frac{E_2}{\hbar}t} \right) \left( c_1\psi_1(x)e^{-i\frac{E_1}{\hbar}t}+c_2\psi_2(x)e^{-i\frac{E_2}{\hbar}t} \right)\\ &= c_1^2\psi_1^2+c_2^2\psi_2^2+2c_1c_2\psi_1\psi_2\cos\left[ \dfrac{E_2-E_1}{\hbar}t \right] \end{aligned}}

As we can see, even though {\psi_1} and {\psi_2} are stationary states, and hence their probability densities are constant, the probability density of the final wave function oscillates sinusoidally with angular frequency {(E_2-E_1)/\hbar}.
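
A small numerical illustration of this oscillation, using the infinite-well states {n=1,2} in units where {\hbar=m=a=1} and assuming equal weights {c_1=c_2=1/\sqrt{2}} (these choices are illustrative, not from the text):

```python
# Probability density of an equal-weight superposition of the n = 1 and n = 2 infinite-well
# states, in units where hbar = m = a = 1 (illustrative choices).
import numpy as np

a = 1.0
c1 = c2 = 1 / np.sqrt(2)
psi = lambda n, x: np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
E = lambda n: n**2 * np.pi**2 / (2 * a**2)

x = 0.25                       # probe point inside the well
omega = E(2) - E(1)            # angular frequency (E_2 - E_1)/hbar with hbar = 1
for t in np.linspace(0, 2 * np.pi / omega, 5):
    density = (c1 * psi(1, x))**2 + (c2 * psi(2, x))**2 \
              + 2 * c1 * c2 * psi(1, x) * psi(2, x) * np.cos(omega * t)
    print(f"t = {t:5.3f}   |Psi({x}, t)|^2 = {density:.4f}")
```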

Prove that for normalizable solutions the separation constant {E} must be real.

Let us write {E} as

{E=E_0+i\Gamma}

Then the wave function is of the form

\displaystyle  \Psi(x,t)=\psi(x)e^{-i\frac{E_0}{\hbar}t}e^{\frac{\Gamma}{\hbar}t}

{\begin{aligned} 1 &= \int_{-\infty}^{+\infty}|\Psi(x,t)|^2\, dx \\ &= \int_{-\infty}^{+\infty} \psi^*(x)\psi(x)e^{-i\frac{E_0}{\hbar}t}e^{i\frac{E_0}{\hbar}t}e^{\frac{\Gamma}{\hbar}t}e^{\frac{\Gamma}{\hbar}t}\, dx \\ &= e^{\frac{2\Gamma}{\hbar}t}\int_{-\infty}^{+\infty}|\psi(x)|^2\, dx \end{aligned}}

The final expression has to be equal to {1} for all values of {t}. The only way for that to happen is to set {\Gamma=0}. Thus {E} is real.

Show that the time-independent wave function can always be taken to be a real valued function.

We know that {\psi(x)} is a solution of

\displaystyle  -\frac{\hbar^2}{2m}\frac{d^2 \psi}{d x^2}+V\psi=E\psi

Taking the complex conjugate of the previous equation

\displaystyle  -\frac{\hbar^2}{2m}\frac{d^2 \psi^*}{d x^2}+V\psi^*=E\psi^*

Hence {\psi^*} is also a solution of the time-independent Schroedinger equation.

Our next result will be to show that if {\psi_1} and {\psi_2} are solutions of the time-independent Schroedinger equation with energy {E} then their linear combination also is a solution to the time-independent Schroedinger equation with energy {E}.

Let’s take

\displaystyle \psi_3=c_1\psi_1+c_2\psi_2

as the linear combination.

{\begin{aligned} -\frac{\hbar^2}{2m}\frac{d^2 \psi_3}{d x^2}+V\psi_3 &= -\frac{\hbar^2}{2m}\left( c_1\dfrac{\partial ^2\psi_1}{\partial x^2}+c_2\dfrac{\partial ^2\psi_2}{\partial x^2} \right)+ V(c_1\psi_1+c_2\psi_2)\\ &= c_1\left( -\frac{\hbar^2}{2m}\dfrac{\partial ^2\psi_1}{\partial x^2}+V\psi_1 \right)+c_2\left( -\frac{\hbar^2}{2m}\dfrac{\partial ^2\psi_2}{\partial x^2}+V\psi_2 \right)\\ &= c_1E\psi_1 + c_2E\psi_2\\ &= E(c_1\psi_1+c_2\psi_2)\\ &= E\psi_3 \end{aligned}}

After proving this result it is obvious that {\psi+\psi^*} and {i(\psi-\psi^*)} are solutions of the time-independent Schroedinger equation. Apart from being solutions, it is also evident from their construction that these functions are real. Since they have the same value of {E} as {\psi} we can use either one of them as a solution to the time-independent Schroedinger equation.

If {V(x)} is an even function then {\psi(x)} can always be taken to be either even or odd.

Since {V(x)} is even we know that {V(-x)=V(x)}. Now we need to prove that if {\psi(x)} is a solution to the time-independent Schroedinger equation then {\psi(-x)} also is a solution.

Changing from {x} to {-x} in the time-independent Schroedinger equation

\displaystyle  -\frac{\hbar^2}{2m}\frac{d^2 \psi(-x)}{d (-x)^2}+V(-x)\psi(-x)=E\psi(-x)

in order to understand what’s going on with the previous equation we need to simplify

\displaystyle  \dfrac{d^2}{d (-x)^2}

Let us introduce the variable {u} and define it as {u=-x}. Then

\displaystyle \frac{d}{du}=\frac{dx}{du}\frac{d}{dx}=-\frac{d}{dx}

And for the second derivative it is

\displaystyle  \frac{d^2}{du^2}=\frac{dx}{du}\frac{d}{dx}\frac{dx}{du}\frac{d}{dx}=\left(-\frac{d}{dx}\right)\left(-\frac{d}{dx}\right)=\frac{d^2}{dx^2}

In the last expression {u} is a dummy variable and thus can be substituted by any other symbol (also see this post Trick with partial derivatives in Statistical Physics to see what kind of manipulations you can do with change of variables and derivatives). For convenience we’ll change it back to {x} and it is

\displaystyle  \dfrac{d^2}{d (-x)^2}=\dfrac{d^2}{d x^2}

So that our initial expression becomes

\displaystyle  -\frac{\hbar^2}{2m}\frac{d^2 \psi(-x)}{d x^2}+V(-x)\psi(-x)=E\psi(-x)

Using the fact that {V(x)} is even it is

\displaystyle  -\frac{\hbar^2}{2m}\frac{d^2 \psi(-x)}{d x^2}+V(x)\psi(-x)=E\psi(-x)

Hence {\psi(-x)} is also a solution to the time-independent Schroedinger equation.

Since {\psi(x)} and {\psi(-x)} are solutions to the time-independent Schroedinger equation whenever {V(x)} is even we can construct even and odd functions that are solutions to the time-independent Schroedinger equation.

The even functions are constructed as

\displaystyle  h(x)=\psi(x)+\psi(-x)

and the odd functions are constructed as

\displaystyle  g(x)=\psi(x)-\psi(-x)

Since it is

\displaystyle  \psi(x)=\frac{1}{2}(h(x)+g(x))

we have shown that any solution to the time-independent Schroedinger equation can be expressed as a linear combination of even and odd functions whenever the potential is an even function.

Show that {E} must exceed the minimum value of {V(x)} for every normalizable solution to the time-independent Schroedinger equation.

Rewriting the time-independent Schroedinger equation by solving for its second derivative with respect to {x}

\displaystyle \frac{d^2\psi}{dx^2}=\frac{2m}{\hbar ^2}(V(x)-E)\psi

Let us proceed with a proof by contradiction and assume that we have {V_{\mathrm{min}}>E}. Using the previous equation this implies that {\dfrac{d^2\psi}{dx^2}} and {\psi } have the same sign. This comes from the fact that {\frac{2m}{\hbar ^2}(V(x)-E)} is positive.

Let us suppose that {\psi} is always positive. Then {\dfrac{d^2\psi}{dx^2}} is also always positive. Hence {\psi} is concave up, always curving away from the {x} axis.

Since by hypothesis our function is normalizable it needs to go to {0} as {x\rightarrow \pm\infty}. But in order for a positive function to decay to {0} it has to turn over towards the axis at some point.

Such a behavior would imply that there is a region of space where the function is positive while its second derivative is negative.

Such a behavior is in direct contradiction with the conclusion that {\psi} and {\dfrac{d^2\psi}{dx^2}} always have the same sign. Since this contradiction arose from the hypothesis that {V_{\mathrm{min}}>E} the logical conclusion is that {V_{\mathrm{min}}<E}.

With {V_{\mathrm{min}}<E}, {\psi} and {\dfrac{d^2\psi}{dx^2}} no longer need to have the same sign at all times. Hence {\psi} can turn over and decay to {0}.

The Wave Function 05

— 1.6. The Uncertainty Principle —

Imagine that one is holding a rope whose other end is tied to a brick wall. If one suddenly jerks the rope, a pulse forms and travels along the rope until it hits the wall. At every instant of time you could fairly reasonably ascribe a position to this wave pulse, but if you were asked to calculate its wavelength you wouldn't know how to do it, since this phenomenon isn't periodic.

Imagine now that, instead of producing just one jerk, you continuously wave the rope so that you end up producing a standing wave. In this case the wavelength is perfectly defined, since this phenomenon is periodic, but the position of the wave loses its meaning.

Quantum mechanics, as we’ll see in later posts, asks for a particle description that is given in terms of wave packets. Roughly speaking, a wave packet is the result of summing an infinite number of waves (with different wave numbers and phases) that exhibit constructive interference in just a small region of space. An infinite number of waves with different momenta is needed to ensure constructive and destructive interference in the appropriate regions of space.

(Figure: a wave packet.)

Hence we see that by summing more and more waves we are able to make the position of the particle more and more defined while simultaneously making its momentum less and less defined (remember that the waves that we are summing all have different momenta).

In a more formal language one would say that one is working in two different spaces. The position space and the momentum space. What we’re seeing is that in the wave packet formalism it is impossible to have a phenomenon that is perfectly localized in both spaces at the same time.

More physically speaking this means that for a particle its position and momentum have an inherent spread. One can theoretically make the spread of one of the quantities as small as one wants but that would cause the spread in the other quantity to get larger and larger. That is to say the more localized a particle is the more its momentum is spread and the more precise a particle’s momentum is the more fuzzy is its position.

This result is known as Heisenberg's uncertainty principle. One can make it mathematically rigorous, but for now this handwaving argument is enough. With it we can already see that Quantum Mechanics demands a radically new way of confronting reality.

For now we'll just put this result on a quantitative footing and leave its proof for a later post

\displaystyle \sigma_x \sigma_p \geq \frac{\hbar}{2} \ \ \ \ \ (31)

 

One can interpret the uncertainty principle in the language of measurements made on an ensemble of identically prepared systems. Imagine that you prepare an ensemble whose position measurements are sharply defined, which is to say that every time you measure the position of a particle the results are very much alike. In this case, if you were to also measure the momentum of each particle, you would see that the values you'd end up measuring would be wildly different.

On the other hand you could possibly want to have an ensemble of particles whose momentum measurements would end up with values that have small differences between them. In this case the price to pay would be that the positions of the particles would be scattered all over the place.

Evidently, between those two extremes there is a plethora of possible results. The only limitation the uncertainty principle stipulates is that the product of the spreads of the two quantities has to be at least {\dfrac{\hbar}{2}}.
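
As a rough numerical illustration of this trade-off (not a proof), the sketch below builds Gaussian packets of a few assumed widths, obtains the momentum distribution with a discrete Fourier transform, and checks that narrowing the packet in {x} broadens it in {p} while the product of the spreads stays at {\hbar/2}:

```python
# Position/momentum trade-off for Gaussian packets of different (assumed) widths,
# in units where hbar = 1: sigma_x shrinks, sigma_p grows, the product stays at 1/2.
import numpy as np

hbar = 1.0
x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)   # wave numbers conjugate to x

for sigma in (0.5, 1.0, 2.0):
    psi = (1 / (np.pi * sigma**2)) ** 0.25 * np.exp(-x**2 / (2 * sigma**2))
    phi = np.fft.fft(psi)                      # momentum-space amplitude (up to a phase/scale)
    prob_x = np.abs(psi) ** 2
    prob_k = np.abs(phi) ** 2
    prob_k /= np.sum(prob_k)                   # normalize the discrete momentum distribution
    sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)
    sigma_p = hbar * np.sqrt(np.sum(k**2 * prob_k))
    print(f"sigma = {sigma}: sigma_x = {sigma_x:.3f}, sigma_p = {sigma_p:.3f}, "
          f"product = {sigma_x * sigma_p:.3f}")
```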

Exercise 5 A particle of mass {m} is in the state

\displaystyle \Psi(x,t)=Ae^{-a\left[\dfrac{mx^2}{\hbar}+it\right]} \ \ \ \ \ (32)

where {A} and {a} are positive constants.

Find A

To find the value of {A} one has to normalize the wave function

{\begin{aligned} 1 &= \int_{-\infty}^{+\infty} |\Psi(x,t)|^2\,dx\\ &= |A|^2\int_{-\infty}^{+\infty} e^{-2a\dfrac{mx^2}{\hbar}}\, dx\\ &= |A|^2 \sqrt{\dfrac{\hbar\pi}{2am}} \end{aligned}}

Thus

\displaystyle A=\sqrt[4]{\frac{2am}{\hbar\pi}}

For what potential energy function {V(x)} does {\Psi} satisfy the Schroedinger equation?

The Schroedinger equation is

\displaystyle i\hbar\frac{\partial \Psi}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2}+V\Psi \ \ \ \ \ (33)

For the first term it follows

\displaystyle \frac{\partial \Psi}{\partial t}=-ia\Psi

For the first {x} derivative it is

\displaystyle \frac{\partial \Psi}{\partial x}=-\frac{2amx}{\hbar}\Psi

For the second order {x} derivative it is

{\begin{aligned} \frac{\partial ^2 \Psi}{\partial x^2} &= -\frac{2am}{\hbar}\Psi+ \dfrac{4a^2m^2x^2}{\hbar ^2}\Psi\\ &= -\dfrac{2am}{\hbar}\left( 1-\dfrac{2amx^2}{\hbar} \right)\Psi \end{aligned}}

Replacing these expressions into the Schroedinger equation yields

{\begin{aligned} V\Psi &= i\hbar\dfrac{\partial \Psi}{\partial t}+\dfrac{\hbar ^2}{2m}\dfrac{\partial^2 \Psi}{\partial x^2}\\ &= a\hbar\Psi+\dfrac{\hbar ^2}{2m}\left[ -\dfrac{2am}{\hbar} \left( 1-\dfrac{2amx^2}{\hbar} \right)\Psi \right]\\ &= a\hbar\Psi-a\hbar\Psi+\hbar a\dfrac{2amx^2}{\hbar}\Psi\\ &= 2ma^2x^2\Psi \end{aligned}}

Thus

\displaystyle V=2ma^2x^2

Calculate the expectation values of {x}, {x^2}, {p} and {p^2}.

The expectation value of {x}

\displaystyle <x>=|A|^2\int_{-\infty}^{+\infty}xe^{-2am\frac{x^2}{\hbar}}\, dx=0

The expectation value of {p}

\displaystyle <p>=m\frac{d<x>}{dt}=0

The expectation value of {x^2}

{\begin{aligned} <x^2> &= |A|^2\int_{-\infty}^{+\infty}x^2e^{-2am\frac{x^2}{\hbar}}\, dx\\ &= 2|A|^2\dfrac{1}{4(2am/\hbar)}\sqrt{\dfrac{\pi\hbar}{2am}}\\ &= \dfrac{\hbar}{4am} \end{aligned}}

The expectation value of {p^2}

{\begin{aligned} <p^2> &= \int_{-\infty}^{+\infty}\Psi ^* \left( \dfrac{\hbar}{i}\dfrac{\partial }{\partial x} \right)^2\Psi\, dx\\ &= -\hbar ^2\int_{-\infty}^{+\infty}\Psi ^* \dfrac{\partial ^2 \Psi}{\partial x^2}\, dx\\ &= -\hbar ^2\int_{-\infty}^{+\infty}\Psi ^* \left[ -\dfrac{2am}{\hbar} \left( 1-\dfrac{2amx^2}{\hbar} \right)\Psi \right]\, dx\\ &= 2am\hbar\int_{-\infty}^{+\infty}\Psi ^* \left( 1-\dfrac{2amx^2}{\hbar} \right)\Psi\, dx\\ &= 2am\hbar\left[ \int_{-\infty}^{+\infty}\Psi ^*\Psi\, dx -\dfrac{2am}{\hbar}\int_{-\infty}^{+\infty}\Psi ^* x^2 \Psi\, dx\right]\\ &= 2am\hbar\left[ 1-\dfrac{2am}{\hbar}<x^2> \right]\\ &= 2am\hbar\left[ 1-\dfrac{2am}{\hbar}\dfrac{\hbar}{4am}\right]\\ &=2am\hbar\left( 1-1/2 \right)\\ &=am\hbar \end{aligned}}

Find {\sigma_x} and {\sigma_p}. Is their product consistent with the uncertainty principle?

\displaystyle \sigma_x=\sqrt{<x^2>-<x>^2}=\sqrt{\dfrac{\hbar}{4am}}

\displaystyle \sigma_p=\sqrt{<p^2>-<p>^2}=\sqrt{am\hbar}

And the product of the two previous quantities is

\displaystyle \sigma_x \sigma_p=\sqrt{\dfrac{\hbar}{4am}}\sqrt{am\hbar}=\frac{\hbar}{2}

The product is consistent with the uncertainty principle; in fact it saturates the bound, as expected for a Gaussian wave packet.
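
A quick numerical cross-check of these results, in units where {\hbar=m=1} and with an arbitrarily assumed value {a=0.5}:

```python
# Check of Exercise 5 in units hbar = m = 1 with an assumed constant a = 0.5:
# normalization, <x^2>, and the product sigma_x sigma_p.
import numpy as np
from scipy.integrate import quad

hbar = m = 1.0
a = 0.5                                             # assumed positive constant

A = (2 * a * m / (hbar * np.pi)) ** 0.25
psi = lambda x: A * np.exp(-a * m * x**2 / hbar)    # spatial part (the phase drops out of |Psi|^2)

norm, _ = quad(lambda x: psi(x) ** 2, -np.inf, np.inf)
x2, _ = quad(lambda x: x**2 * psi(x) ** 2, -np.inf, np.inf)

sigma_x = np.sqrt(x2)                 # <x> = 0, so sigma_x^2 = <x^2>
sigma_p = np.sqrt(a * m * hbar)       # from <p^2> = a m hbar and <p> = 0

print("normalization   =", norm)                                  # ~1
print("<x^2>           =", x2, " vs hbar/(4am) =", hbar / (4 * a * m))
print("sigma_x sigma_p =", sigma_x * sigma_p, " vs hbar/2 =", hbar / 2)
```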

 

The Wave Function 04

— 1.5. Momentum and other Dynamical quantities —

Let us suppose that we have a particle described by the wave function {\Psi}. Then the expectation value of its position is (as we saw in The Wave Function 02):

\displaystyle <x>=\int_{-\infty}^{+\infty}x|\Psi(x,t)|^2\, dx

Neophytes interpret the previous equation as if it said that the expectation value coincides with the average of repeated measurements of the position of a particle described by {\Psi}. This interpretation is wrong, since the first measurement will make the wave function collapse to the value that is actually obtained, and if further position measurements are made right away they will simply return the same value as the first measurement.

Actually {<x>} is the average of position measurements of particles that are all described by the state {\Psi}. That is to say that we have two ways of actually accomplishing what is implied by the previous interpretation of {<x>}:

  1. We have a single particle. Then, after a position measurement is made, we have to be able to return the particle to the state {\Psi} before making a new measurement.
  2. We have a collection – a statistical ensemble is a more respectable name – of a great number of particles (in order for it to be statistically significant) and we arrange them all to be in the state {\Psi}. If we measure the position of all these particles, then the average of the measurements should be {<x>}.

To put it more succinctly:

The expectation value is the average of repeated measurements on an ensemble of identically prepared systems.

Since {\Psi} is a time-dependent mathematical object it is obvious that {<x>} is also a time-dependent quantity:

{\begin{aligned} \dfrac{d<x>}{dt}&= \int_{-\infty}^{+\infty}x\dfrac{\partial}{\partial t}|\Psi|^2\, dx \\ &= \dfrac{i\hbar}{2m}\int_{-\infty}^{+\infty}x\dfrac{\partial}{\partial x}\left( \Psi^*\dfrac{\partial \Psi}{\partial x}-\dfrac{\partial \Psi^*}{\partial x}\Psi \right)\, dx \\ &= -\dfrac{i\hbar}{2m}\int_{-\infty}^{+\infty}\left( \Psi^*\dfrac{\partial \Psi}{\partial x}-\dfrac{\partial \Psi^*}{\partial x}\Psi \right)\,dx \\ &= -\dfrac{i\hbar}{m}\int_{-\infty}^{+\infty}\left( \Psi^*\dfrac{\partial \Psi}{\partial x}\right)\,dx \end{aligned}}

where we have used integration by parts and the fact that the wave function has to be square integrable which is to say that the function is vanishingly small as {x} approaches infinity.

(Allow me to go on a tangent here, but I just want to say that, rigorously speaking, the Hilbert space isn't the best mathematical space in which to construct the formalism of quantum mechanics. The problem with the Hilbert space approach to quantum mechanics is twofold:

  1. the functions that are in Hilbert space are necessarily square integrable. The problem is that many times we need to calculate quantities that depend not on a given function but on its derivative (for example), but just because a function is square integrable it doesn’t mean that its derivative also is. Hence we don’t have any mathematical guarantee that most of the integrals that we are computing actually converge.
  2. The second problem is that when we are dealing with continuous spectra (later on we'll see what this means) the eigenfunctions (we'll also see what this means) aren't normalizable.

The proper way of doing quantum mechanics is by using rigged Hilbert spaces. A good first introduction to rigged Hilbert spaces and their use in Quantum Mechanics is given by Rafael de la Madrid in the article The role of the rigged Hilbert space in Quantum Mechanics )

The previous equation doesn't express the average velocity of a quantum particle. In our construction of quantum mechanics nothing allows us to talk about the velocity of a particle. In fact we don't even know what the meaning of

velocity of a particle

is in quantum mechanics!

Since a particle doesn't have a definite position prior to its measurement it also can't have a well defined velocity. Later on we'll see how to construct the probability density for velocity in the state {\Psi}.

For the purposes of the present section we’ll just postulate that the expectation value of the velocity is equal to the time derivative of the expectation value of position.

\displaystyle  <v>=\dfrac{d<x>}{dt} \ \ \ \ \ (24)

As we saw in the lagrangian formalism and hamiltonian formalism posts of our blog it is more customary (since it is more powerful) to work with momentum instead of velocity. Since {p=mv} the relevant equation for momentum is:

\displaystyle  <p>=m\dfrac{d<x>}{dt}=-i\hbar\int_{-\infty}^{+\infty}\left( \Psi^*\dfrac{\partial \Psi}{\partial x}\right)\,dx \ \ \ \ \ (25)

Since {x} represents the position operator we can say in an analogous way that

\displaystyle \frac{\hbar}{i}\frac{\partial}{\partial x}

represents the momentum operator. A way to see why this definition makes sense is to rewrite the definition of the expectation value of the position

\displaystyle  <x>=\int \Psi^* x \Psi \, dx

and to rewrite equation 25 in a more compelling way

\displaystyle  <p> = \int \Psi^*\left( \frac{\hbar}{i}\frac{\partial}{\partial x} \right) \Psi \, dx

After knowing how to calculate the expectation value of these two dynamical quantities the question now is how can one calculate the expectation value of other dynamical quantities of interest?

The thing is that all dynamical quantities can be expressed as functions of {x} and {p}. Taking this into account one just has to write the quantity of interest as the appropriate function of {x} and {p} and then calculate the expectation value.

In a more formal (hence more respectable) way the equation for the expectation value of the dynamical quantity {Q=Q(x,p)} is

\displaystyle  <Q(x,p)>=\int\Psi^*Q\left( x,\frac{\hbar}{i}\frac{\partial}{\partial x} \right)\Psi\, dx \ \ \ \ \ (26)

As an example let us look into the relevant expression for the kinetic energy (the relevant definition can be found at Newtonian Mechanics 01). Henceforth we'll use {T} to denote the kinetic energy instead of {K} in order to use the same notation as Introduction to Quantum Mechanics (2nd Edition).

\displaystyle T=\frac{1}{2}mv^2=\frac{p^2}{2m}

Hence the expectation value is

\displaystyle  <T>=-\frac{\hbar ^2}{2m}\int\Psi^*\frac{\partial ^2\Psi}{\partial x^2}\, dx \ \ \ \ \ (27)
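
As a sketch of how these expectation values can be evaluated in practice, the script below applies the recipes above to an assumed sample wave function (a Gaussian envelope times a plane-wave factor, in units {\hbar=m=1}), approximating the derivatives by finite differences:

```python
# Numerical evaluation of <x>, <p> and <T> for an assumed sample wave function:
# a Gaussian envelope times a plane-wave factor, in units hbar = m = 1.
import numpy as np

hbar = m = 1.0
k0 = 2.0                  # assumed mean wave number
sigma = 1.0               # assumed packet width

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
Psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)

dPsi = np.gradient(Psi, dx)       # d Psi / dx by finite differences
d2Psi = np.gradient(dPsi, dx)     # d^2 Psi / dx^2

exp_x = np.real(np.sum(np.conj(Psi) * x * Psi) * dx)
exp_p = np.real(np.sum(np.conj(Psi) * (hbar / 1j) * dPsi) * dx)
exp_T = np.real(np.sum(np.conj(Psi) * (-hbar**2 / (2 * m)) * d2Psi) * dx)

print("<x> =", exp_x)             # ~0
print("<p> =", exp_p)             # ~hbar k0 = 2
print("<T> =", exp_T)             # ~(k0^2 + 1/(2 sigma^2))/2 = 2.25 in these units
```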

Exercise 3 Why can’t you do integration by parts directly on

\displaystyle  \frac{d<x>}{dt}=\int x\frac{\partial}{\partial t}|\Psi|^2 \, dx

pull the time derivative over onto {x}, note that {\partial x/\partial t=0} and conclude that {d<x>/dt=0}?

Because integration by parts can only be used when the differentiation and the integration are done with respect to the same variable; here the integration is over {x} while the derivative is taken with respect to {t}.

Exercise 4 Calculate

\displaystyle \frac{d<p>}{dt}

First let us recall the Schroedinger equation:

\displaystyle  \frac{\partial \Psi}{\partial t}=\frac{i\hbar}{2m}\frac{\partial^2\Psi}{\partial x^2}-\frac{i}{\hbar}V\Psi \ \ \ \ \ (28)

And its complex conjugate

\displaystyle  \frac{\partial \Psi^*}{\partial t}=-\frac{i\hbar}{2m}\frac{\partial^2\Psi^*}{\partial x^2}+\frac{i}{\hbar}V\Psi^* \ \ \ \ \ (29)

Then the time evolution of the expectation value of momentum is

{\begin{aligned} \dfrac{d<p>}{dt} &= \dfrac{d}{dt}\int\Psi ^* \dfrac{\hbar}{i}\dfrac{\partial \Psi}{\partial x}\, dx\\ &= \dfrac{\hbar}{i}\int \dfrac{\partial}{\partial t}\left( \Psi ^* \dfrac{\partial \Psi}{\partial x}\right)\, dx\\ &= \dfrac{\hbar}{i}\int\left( \dfrac{\partial \Psi^*}{\partial t}\dfrac{\partial \Psi}{\partial x}+\Psi^* \dfrac{\partial}{\partial x}\dfrac{\partial \Psi}{\partial t} \right) \, dx\\ &= \dfrac{\hbar}{i}\int \left[ \left( -\dfrac{i\hbar}{2m}\dfrac{\partial^2\Psi^*}{\partial x^2}+\dfrac{i}{\hbar}V\Psi^* \right)\dfrac{\partial \Psi}{\partial x} + \Psi^*\dfrac{\partial}{\partial x}\left( \dfrac{i\hbar}{2m}\dfrac{\partial^2\Psi}{\partial x^2}-\dfrac{i}{\hbar}V\Psi \right)\right]\, dx\\\ &= \dfrac{\hbar}{i}\int \left[ -\dfrac{i\hbar}{2m}\left(\dfrac{\partial^2\Psi^*}{\partial x^2}\dfrac{\partial\Psi}{\partial x}-\Psi^*\dfrac{\partial ^3 \Psi}{\partial x^3} \right)+\dfrac{i}{\hbar}\left( V\Psi ^*\dfrac{\partial\Psi}{\partial x}-\Psi ^*\dfrac{\partial (V\Psi)}{\partial x}\right)\right]\, dx \end{aligned}}

First we'll calculate the first term of the integral (ignoring the constant factors), doing integration by parts twice (remember that the boundary terms vanish)

{\begin{aligned} \int \left(\dfrac{\partial^2\Psi^*}{\partial x^2}\dfrac{\partial\Psi}{\partial x}-\Psi^*\dfrac{\partial ^3 \Psi}{\partial x^3}\right)\, dx &= \left[ \dfrac{\partial \Psi^*}{\partial x} \dfrac{\partial \Psi}{\partial x}\right]-\int\dfrac{\partial \Psi^*}{\partial x}\dfrac{\partial ^2 \Psi}{\partial x^2}\, dx- \int \Psi^*\dfrac{\partial ^3 \Psi}{\partial x^3}\, dx \\ &=-\left[ \Psi ^*\dfrac{\partial ^2 \Psi}{\partial x^2} \right]+\int \Psi^*\dfrac{\partial ^3 \Psi}{\partial x^3}\, dx - \int \Psi^*\dfrac{\partial ^3 \Psi}{\partial x^3}\, dx \\ &= 0 \end{aligned}}

Then we’ll calculate the second term of the integral

{\begin{aligned} \int \left( V\Psi ^*\dfrac{\partial\Psi}{\partial x}-\Psi ^*\dfrac{\partial (V\Psi)}{\partial x} \right)\, dx &= \int \left( V\Psi ^*\dfrac{\partial\Psi}{\partial x}-\Psi ^* \dfrac{\partial V}{\partial x}\Psi-\Psi ^*V\dfrac{\partial \Psi}{\partial x} \right)\, dx\\ &= -\int\Psi ^* \dfrac{\partial V}{\partial x}\Psi\, dx\\ &=<-\dfrac{\partial V}{\partial x}> \end{aligned}}

In conclusion it is

\displaystyle  \frac{d<p>}{dt}=<-\dfrac{\partial V}{\partial x}> \ \ \ \ \ (30)

Hence the expectation value of the momentum operator obeys Newton's Second Axiom. The previous result can be generalized, and its generalization is known in the Quantum Mechanics literature as Ehrenfest's theorem.

The Wave Function 03

— 1.4. Normalization —

The Schroedinger equation is a linear partial differential equation. As such, if {\Psi(x,t)} is a solution to it, then {A\Psi(x,t)} (where {A} is a complex constant) also is a solution.

Does this mean that a physical problem has an infinite number of solutions in Quantum Mechanics? It doesn't! The thing is that besides the Schroedinger equation one also has condition 11 to take into account. Stating 11 for the wave function:

\displaystyle \int_{-\infty}^{+\infty}|\Psi (x,t)|^2dx=1 \ \ \ \ \ (15)

 

The previous equation states the quite obvious fact that the particle under study has to be somewhere at any given instant.

Since {A} is a complex constant the normalization condition fixes {A} in absolute value but can't tell us anything about its phase. Apparently, once again one is haunted with the prospect of having an infinite number of solutions to any given physical problem. The thing is that this time the phase doesn't carry any physical significance (a fact that will be demonstrated later) and thus we actually have just one physical solution.

In the previous discussion one is obviously assuming that the wave function is normalizable. That is to say that the function doesn’t blow up and vanishes quickly enough at infinity so that the integral being computed makes sense.

At this level it is customary to say that these wave functions don't represent physical states, but that isn't exactly true. A wave function that isn't normalizable because the integral is infinite might represent a beam of particles in a scattering experiment; the divergence of the integral can then be said to represent the fact that the beam is composed of an infinite number of particles. The identically null wave function, in turn, represents the absence of particles.

A question that now arises has to do with the consistency of our normalization, and this is a very sensible question. The point is that we normalize the wave function at a given instant of time, so how does one know that the normalization holds at other times?

Let us look into the time evolution of our normalization condition 15.

\displaystyle \frac{d}{dt}\int_{-\infty}^{+\infty}|\Psi (x,t)|^2dx=\int_{-\infty}^{+\infty}\frac{\partial}{\partial t}|\Psi (x,t)|^2dx \ \ \ \ \ (16)

 

Calculating the derivative under the integral for the right hand side of the previous equation

{\begin{aligned} \frac{\partial}{\partial t}|\Psi (x,t)|^2&=\frac{\partial}{\partial t}(\Psi^* (x,t)\Psi (x,t))\\ &=\Psi^* (x,t)\frac{\partial\Psi (x,t)}{\partial t}+\frac{\partial \Psi^* (x,t)}{\partial t}\Psi (x,t) \end{aligned}}

The complex conjugate of the Schroedinger equation is

\displaystyle \frac{\partial \Psi^*(x,t)}{\partial t}=-\frac{i\hbar}{2m}\frac{\partial^2\Psi^*(x,t)}{\partial x^2}+\frac{i}{\hbar}V\Psi^*(x,t) \ \ \ \ \ (17)

 

Hence for the derivative under the integral

{\begin{aligned} \frac{\partial}{\partial t}|\Psi (x,t)|^2&=\frac{\partial}{\partial t}(\Psi^* (x,t)\Psi (x,t))\\ &=\Psi^* (x,t)\frac{\partial\Psi (x,t)}{\partial t}+\frac{\partial \Psi^* (x,t)}{\partial t}\Psi (x,t)\\ &=\frac{i\hbar}{2m}\left( \Psi^*(x,t)\frac{\partial^2\Psi(x,t)}{\partial x^2}-\frac{\partial^2\Psi^*(x,t)}{\partial x^2}\Psi (x,t)\right)\\ &=\frac{\partial}{\partial x}\left[ \frac{i\hbar}{2m}\left( \Psi^*(x,t)\frac{\partial\Psi(x,t)}{\partial x}-\frac{\partial\Psi^*(x,t)}{\partial x}\Psi(x,t) \right) \right] \end{aligned}}

Getting back to 16

\displaystyle \frac{d}{dt}\int_{-\infty}^{+\infty}|\Psi (x,t)|^2dx=\frac{i\hbar}{2m}\left[ \Psi^*(x,t)\frac{\partial\Psi(x,t)}{\partial x}-\frac{\partial\Psi^*(x,t)}{\partial x}\Psi(x,t) \right]_{-\infty}^{+\infty} \ \ \ \ \ (18)

 

Since we’re assuming that our wave function is normalizable the wave function (and its complex conjugate) must vanish for {+\infty} and {-\infty}.

In conclusion

\displaystyle \frac{d}{dt}\int_{-\infty}^{+\infty}|\Psi (x,t)|^2dx=0

Since the derivative vanishes one can conclude that the integral is constant.

In conclusion one can say that if one normalizes the wave function at a given instant it stays normalized for all times.

Exercise 1 At time {t=0} a particle is represented by the wave function

\displaystyle \Psi(x,0)=\begin{cases} Ax/a & \text{if } 0\leq x\leq a\\ A(b-x)/(b-a) & \text{if } a\leq x\leq b \\ 0 & \text{otherwise}\end{cases} \ \ \ \ \ (19)

 

where {A}, {a} and {b} are constants.

  1. Normalize {\Psi}.

{\begin{aligned} 1&=\int_{-\infty}^{+\infty} |\Psi|^2\,dx\\ &=\int_0^a|\Psi|^2\,dx+\int_a^b|\Psi|^2\,dx\\ &=\dfrac{|A|^2}{a^2}\int_0^a x^2\,dx+\dfrac{|A|^2}{(b-a)^2}\int_a^b(b-x)^2\,dx\\ &=\dfrac{|A|^2}{a^2}\left[ \dfrac{x^3}{3} \right]_0^a+\dfrac{|A|^2}{(b-a)^2}\left[ -\dfrac{(b-x)^3}{3} \right]_a^b\\ &=\dfrac{|A|^2a}{3}+\dfrac{|A|^2}{(b-a)^2}\dfrac{(b-a)^3}{3}\\ &=\dfrac{|A|^2a}{3}+|A|^2\dfrac{b-a}{3}\\ &=\dfrac{b|A|^2}{3} \end{aligned}}

Hence for {A} it is

\displaystyle A=\sqrt{\dfrac{3}{b}}

  • Sketch {\Psi(x,0)}. In {0\leq x \leq a}, {\Psi(x,0)} is a strictly increasing function that goes from {0} to {A}. In {a \leq x \leq b}, {\Psi(x,0)} is a strictly decreasing function that goes from {A} to {0}. Hence the plot of {\Psi(x,0)} is a triangle with its peak at {x=a} (choosing, for instance, the values {a=1}, {b=2} and {A=\sqrt{3/b}=\sqrt{3/2}}).

     

  • Where is the particle most likely to be found at {t=0}? Since {x=a} is the maximum of {\Psi}, the most likely place for the particle to be found is {x=a}.
  • What is the probability of finding the particle to the left of {a}? Check the answers for {b=a} and {b=2a}.

    {\begin{aligned} P(x<a)&=\int_0^a|\Psi|^2\,dx\\ &=\dfrac{|A|^2}{a^2}\int_0^a x^2\,dx\\ &=\dfrac{|A|^2}{a^2}\left[ \dfrac{x^3}{3} \right]_0^a\\ &=\dfrac{|A|^2}{3}a\\ &=\dfrac{3}{3b}a\\ &=\dfrac{a}{b} \end{aligned}}

    First let us look into the {b=a} limiting case. We can imagine it as the end result of {b} getting nearer and nearer to {a}: the domain of the strictly decreasing part of {\Psi(x,0)} gets shorter and shorter, and when finally {b=a} there is no region where {\Psi(x,0)} is strictly decreasing; {\Psi(x,0)} is then defined only by its strictly increasing and vanishing pieces (in the appropriate domains). That is to say, to the right of {a} the function is {0}. Hence the probability of the particle being found to the left of {a} is {1}. From the previous calculation {P(x<a)_{b=a}=1}, which is indeed the correct result.

    The {b=2a} case can be analyzed in a different way. In this case:

    • {x=a} is the half point of the domain of {\Psi(x,0)} where {\Psi(x,0)} is non vanishing (end points of the domain are excluded).
    • {\Psi(x,0)} is strictly increasing in the first half of the domain ({0\leq x\leq a}).
    • {\Psi(x,0)} is strictly decreasing in the second half of the domain ({a\leq x\leq b}).
    • {\Psi(x,0)} is continuous.

    Thus one can conclude that {\Psi(x,0)} is symmetric around {a} and consequently the probability of the particle being found to the left of {a} has to be {1/2}.

    From the previous calculation {P(x<a)_{b=2a}=1/2} which is indeed the correct result.

  • What is the expectation value of {x}?

    {\begin{aligned} <x>&= \int_0^b x|\Psi|^2\,dx\\ &=\dfrac{|A|^2}{a^2}\int_0^a x^3\,dx+\dfrac{|A|^2}{(b-a)^2}\int_a^b x(b-x)^2\,dx\\ &=\dfrac{|A|^2}{a^2}\left[ \dfrac{x^4}{4} \right]_0^a+\dfrac{|A|^2}{(b-a)^2}\left[ 1/2x^2b^2-2/3x^3b+x^4/4 \right]_a^b\\ &=\dfrac{2a+b}{4} \end{aligned}}
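
A quick numerical check of this exercise, for the assumed values {a=1} and {b=2}:

```python
# Check of Exercise 1 for the assumed values a = 1, b = 2:
# A = sqrt(3/b), P(x < a) = a/b and <x> = (2a + b)/4.
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 2.0
A = np.sqrt(3 / b)

def Psi0(x):
    if 0 <= x <= a:
        return A * x / a
    if a < x <= b:
        return A * (b - x) / (b - a)
    return 0.0

norm, _ = quad(lambda x: Psi0(x) ** 2, 0, b, points=[a])
P_left, _ = quad(lambda x: Psi0(x) ** 2, 0, a)
x_mean, _ = quad(lambda x: x * Psi0(x) ** 2, 0, b, points=[a])

print("normalization =", norm)        # ~1
print("P(x < a)      =", P_left)      # ~a/b = 0.5
print("<x>           =", x_mean)      # ~(2a + b)/4 = 1.0
```
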
Exercise 2 Consider the wave function

\displaystyle \Psi(x,t)=Ae^{-\lambda |x|}e^{-i\omega t} \ \ \ \ \ (20)

 

where {A}, {\lambda} and {\omega} are positive real constants.

  1. Normalize {\Psi}

{\begin{aligned} 1&=\int_{-\infty}^{+\infty} |\Psi|^2\,dx\\ &=\int_{-\infty}^{+\infty} |A|^2e^{-2\lambda |x|}\,dx\\ &=2|A|^2\int_0^{+\infty}e^{-2\lambda |x|}\,dx \\ &=2|A|^2\int_0^{+\infty}e^{-2\lambda x}\,dx \\ &=-\dfrac{|A|^2}{\lambda}\left[ e^{-2\lambda x} \right]_0^{+\infty}\\ &=\dfrac{|A|^2}{\lambda} \end{aligned}}

Hence it is

\displaystyle A=\sqrt{\lambda}

  • Determine {<x>} and {<x^2>}.

    {\begin{aligned} <x>&=\int_{-\infty}^{+\infty} x|\Psi|^2\,dx\\ &=|A|^2\int_{-\infty}^{+\infty} xe^{-2\lambda |x|}\,dx\\ &=0 \end{aligned}}

    The integral vanishes because we're integrating an odd function between symmetric limits.

    {\begin{aligned} <x^2>&=\int_{-\infty}^{+\infty} x^2|\Psi|^2\,dx\\ &=2\lambda\int_0^{+\infty} x^2e^{-2\lambda x}\,dx\\ &=2\lambda\int_0^{+\infty} \dfrac{1}{4}\dfrac{\partial^2}{\partial \lambda ^2}\left( e^{-2\lambda x} \right)\,dx\\ &=\dfrac{\lambda}{2} \dfrac{\partial^2}{\partial \lambda ^2}\int_0^{+\infty}e^{-2\lambda x}\,dx\\ &= \dfrac{\lambda}{2} \dfrac{\partial^2}{\partial \lambda ^2} \left[ -\dfrac{e^{-2\lambda x}}{2\lambda} \right]_0^{+\infty}\\ &= \dfrac{\lambda}{2}\dfrac{\partial^2}{\partial \lambda ^2}\left(\dfrac{1}{2\lambda} \right)\\ &=\dfrac{\lambda}{2}\dfrac{\partial}{\partial \lambda}\left(-\dfrac{1}{2\lambda ^2} \right)\\ &=\dfrac{\lambda}{2}\dfrac{1}{\lambda^3}\\ &= \dfrac{1}{2\lambda^2} \end{aligned}}
  • Find the standard deviation of {x}. Sketch the graph of {\Psi ^2}. What is the probability that the particle will be found outside the range {[<x>-\sigma,<x>+\sigma]}?

    \displaystyle \sigma ^2=<x^2>-<x>^2=\frac{1}{2\lambda ^2}-0=\frac{1}{2\lambda ^2}

    Hence the standard deviation is

    \displaystyle \sigma=\dfrac{\sqrt{2}}{2\lambda}

    The square of the wave function is proportional to {e^{-2\lambda |x|}}. Working, up to the normalization constant {\lambda}, with piecewise expressions for the square of the wave function and for its first and second derivatives with respect to {x}

    \displaystyle |\Psi|^2=\begin{cases} e^{2\lambda x} & \text{if } x < 0\\ e^{-2\lambda x} & \text{if } x \geq 0 \end{cases} \ \ \ \ \ (21)

     

    \displaystyle \dfrac{\partial}{\partial x}|\Psi|^2=\begin{cases} 2\lambda e^{2\lambda x} & \text{if } x < 0\\ -2\lambda e^{-2\lambda x} & \text{if } x \geq 0 \end{cases} \ \ \ \ \ (22)

     

    \displaystyle \dfrac{\partial ^2}{\partial x ^2}|\Psi|^2=\begin{cases} 4\lambda ^2 e^{2\lambda x} & \text{if } x < 0\\ 4\lambda ^2 e^{-2\lambda x} & \text{if } x \geq 0 \end{cases} \ \ \ \ \ (23)

     

    As we can see the first derivative of {|\Psi|^2} changes sign at {0}, from positive to negative. Hence {|\Psi|^2} is strictly increasing before {0} and strictly decreasing after {0}, so {0} is a maximum of {|\Psi|^2}.

    The second derivative is always positive so {|\Psi|^2} is always concave up (convex).

    Hence its graph is a symmetric peak at {x=0} that decays exponentially on both sides.

    The probability that the particle is found outside the range {[<x>-\sigma, <x>+\sigma ]} is

    {\begin{aligned} P(|x|>\sigma)&= 2\int_\sigma^{+\infty}|\Psi|^2\,dx\\ &= 2\lambda\int_\sigma^{+\infty}e^{-2\lambda x}\,dx\\ &= 2\lambda\left[ -\dfrac{e^{-2\lambda x}}{2\lambda} \right]_\sigma^{+\infty}\\ &=e^{-2\lambda\sigma}\\ &=e^{-2\lambda\dfrac{\sqrt{2}}{2\lambda}}\\ &=e^{-\sqrt{2}} \end{aligned}}
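
    Finally, a short numerical check of this exercise for an assumed value {\lambda=1}:

```python
# Check of Exercise 2 for an assumed value lambda = 1:
# <x^2> = 1/(2 lambda^2), sigma = sqrt(2)/(2 lambda), P(outside) = exp(-sqrt(2)).
import numpy as np
from scipy.integrate import quad

lam = 1.0
density = lambda x: lam * np.exp(-2 * lam * np.abs(x))   # |Psi|^2 with A = sqrt(lambda)

x2, _ = quad(lambda x: x**2 * density(x), -np.inf, np.inf)
sigma = np.sqrt(x2)                                      # <x> = 0
P_out = 2 * quad(density, sigma, np.inf)[0]              # both tails by symmetry

print("<x^2>     =", x2, " vs", 1 / (2 * lam**2))
print("sigma     =", sigma, " vs", np.sqrt(2) / (2 * lam))
print("P outside =", P_out, " vs", np.exp(-np.sqrt(2)))
```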