The Wave Function Exercises 01


Exercise 1 For the age distribution introduced in section 1.3.1 it is:

  • {<j>=1/N\sum jN(j)=\dfrac{294}{14}=21}, hence {<j>^2=21^2=441}

    {<j^2>=1/N\sum j^2N(j)=\dfrac{6434}{14}=459.6}

  • Calculating {\Delta j} for each {j}:
    {j} {N(j)} {\Delta j=j-<j>}
    14 {1} {14-21=-7}
    15 {1} {15-21=-6}
    16 {3} {16-21=-5}
    22 {2} {22-21=1}
    24 {2} {24-21=3}
    25 {5} {25-21=4}

    Hence for the variance it follows

    {\sigma ^2=1/N\sum (\Delta j)^2N(j)=\dfrac{260}{14}=18.6}

    Hence the standard deviation is

    \displaystyle \sigma =\sqrt{18.6}=4.3

  • {\sigma^2=<j^2>-<j>^2=459.6-441=18.6}

    And for the standard deviation it is

    \displaystyle \sigma =\sqrt{18.6}=4.3

    This confirms the second expression for the standard deviation.
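
    As a cross-check, here is a minimal Mathematica sketch in the spirit of the Mathematica file at the end of this post (the variable names here are my own) that reproduces these numbers:

    ages = {14, 15, 16, 22, 24, 25};
    counts = {1, 1, 3, 2, 2, 5};
    n = Total[counts];
    jmean = ages.counts/n (* 21 *)
    jsquaredmean = N[(ages^2).counts/n] (* 459.571 *)
    sigma = Sqrt[jsquaredmean - jmean^2] (* 4.30946 *)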

Exercise 2 Consider the first {25} digits in the decimal expansion of {\pi}.

  • What is the probability of getting each of the 10 digits, assuming that one selects a digit at random?

    The first 25 digits of the decimal expansion of {\pi} are

    \displaystyle \{3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 6, 4, 3\}

    Hence for the digits it is

    {N(0)=0} {P(0)=0}
    {N(1)=2} {P(1)=2/25}
    {N(2)=3} {P(2)=3/25}
    {N(3)=5} {P(3)=5/25}
    {N(4)=3} {P(4)=3/25}
    {N(5)=3} {P(5)=3/25}
    {N(6)=3} {P(6)=3/25}
    {N(7)=1} {P(7)=1/25}
    {N(8)=2} {P(8)=2/25}
    {N(9)=3} {P(9)=3/25}
  • The most probable digit is {3}. The median digit is {4}. The average is {<j>=\sum jP(j)=\dfrac{118}{25}=4.72}.

  • {<j^2>=\sum j^2P(j)=\dfrac{710}{25}=28.4}, so {\sigma^2=28.4-4.72^2=6.12} and {\sigma=2.47}.

Exercise 3 The needle on a broken car speedometer is free to swing, and bounces perfectly off the pins at either end, so that if you give it a flick it is equally likely to come to rest at any angle between {0} and {\pi}.

  • Along the {\left[0,\pi\right]} interval the probability that the needle comes to rest within an angle {d\theta} is {d\theta/\pi}. Given the definition of probability density it is {\rho(\theta)=1/\pi}.

    Additionally the probability density also needs to be normalized.

    \displaystyle \int_0^\pi \rho(\theta)d\theta=1\Leftrightarrow\int_0^\pi 1/\pi d\theta=1

    which is trivially true.

    The plot for the probability density is

    [Figure: plot of the constant probability density {\rho(\theta)=1/\pi} on {\left[0,\pi\right]}]

  • Compute {\left\langle\theta \right\rangle}, {\left\langle\theta^2 \right\rangle} and {\sigma}.

    {\begin{aligned} \left\langle\theta \right\rangle &= \int_0^\pi\frac{\theta}{\pi}d\theta\\ &= \frac{1}{\pi}\int_0^\pi\theta d\theta\\ &= \frac{1}{\pi} \left[ \frac{\theta^2}{2} \right]_0^\pi\\ &= \frac{\pi}{2} \end{aligned}}

    For {\left\langle\theta^2 \right\rangle} it is

    {\begin{aligned} \left\langle\theta^2 \right\rangle &= \int_0^\pi\frac{\theta^2}{\pi}d\theta\\ &= \frac{1}{\pi} \left[ \frac{\theta^3}{3} \right]_0^\pi\\ &= \frac{\pi^2}{3} \end{aligned}}

    The variance is {\sigma^2=\left\langle\theta^2 \right\rangle-\left\langle\theta\right\rangle^2 =\dfrac{\pi^2}{3}-\dfrac{\pi^2}{4}=\dfrac{\pi^2}{12}}.

    And the standard deviation is {\sigma=\dfrac{\pi}{2\sqrt{3}}}.

  • Compute {\left\langle\sin\theta\right\rangle}, {\left\langle\cos\theta\right\rangle} and {\left\langle\cos^2\theta\right\rangle}.

    {\begin{aligned} \left\langle\sin\theta \right\rangle &= \int_0^\pi\frac{\sin\theta}{\pi}d\theta\\ &= \frac{1}{\pi}\int_0^\pi\sin\theta d\theta\\ &= \frac{1}{\pi} \left[ -\cos\theta \right]_0^\pi\\ &= \frac{2}{\pi} \end{aligned}}

    and

    {\begin{aligned} \left\langle\cos\theta \right\rangle &= \int_0^\pi\frac{\cos\theta}{\pi}d\theta\\ &= \frac{1}{\pi}\int_0^\pi\cos\theta d\theta\\ &= \frac{1}{\pi} \left[ \sin\theta \right]_0^\pi\\ &= 0 \end{aligned}}

    We’ll leave {\left\langle\cos^2\theta \right\rangle} as an exercise for the reader. As a hint remember that {\cos^2\theta=\dfrac{1+\cos(2\theta)}{2}}.
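
    For readers who want to check these moments symbolically, here is a minimal Mathematica sketch (the names are my own):

    rho[theta_] := 1/Pi
    Integrate[rho[theta], {theta, 0, Pi}] (* 1, normalization *)
    Integrate[theta rho[theta], {theta, 0, Pi}] (* Pi/2 *)
    Integrate[theta^2 rho[theta], {theta, 0, Pi}] (* Pi^2/3 *)
    Integrate[Sin[theta] rho[theta], {theta, 0, Pi}] (* 2/Pi *)
    Integrate[Cos[theta] rho[theta], {theta, 0, Pi}] (* 0 *)

    The same pattern can be used to check your answer for {\left\langle\cos^2\theta\right\rangle}.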

Exercise 4

  • In exercise {1.1} it was shown that the probability density is

    \displaystyle  \rho(x)=\frac{1}{2\sqrt{hx}}

    Hence the mean value of {x} is

    {\begin{aligned} \left\langle x \right\rangle &= \int_0^h\frac{x}{2\sqrt{hx}}dx\\ &= \frac{h}{3} \end{aligned}}

    For {\left\langle x^2 \right\rangle} it is

    {\begin{aligned} \left\langle x^2 \right\rangle &= \int_0^h\frac{x^2}{2\sqrt{hx}}dx\\ &= \frac{1}{2\sqrt{h}}\int_0^h x^{3/2}dx\\ &= \frac{1}{2\sqrt{h}}\left[\frac{2}{5}x^{5/2} \right]_0^h\\ &= \frac{h^2}{5} \end{aligned}}

    Hence the variance is

    \displaystyle \sigma^2=\left\langle x^2 \right\rangle-\left\langle x \right\rangle^2=\frac{h^2}{5}-\frac{h^2}{9}=\frac{4}{45}h^2

    and the standard deviation is

    \displaystyle \sigma=\frac{2h}{3\sqrt{5}}

  • For {x} to be more than one standard deviation away from the average we have two alternatives. The first is the interval {\left[0,\left\langle x \right\rangle-\sigma\right]} and the second is {\left[\left\langle x \right\rangle+\sigma,h\right]}.

    Hence the total probability is the sum of these two probabilities.

    Let {P_1} denote the probability of the first interval and {P_2} denote the probability of the second interval.

    {\begin{aligned} P_1 &= \int_0^{\left\langle x \right\rangle-\sigma}\frac{1}{2\sqrt{hx}}dx\\ &= \frac{1}{2\sqrt{h}}\left[2x^{1/2} \right]_0^{\left\langle x \right\rangle-\sigma}\\ &= \frac{1}{\sqrt{h}}\sqrt{\frac{h}{3}-\frac{2h}{3\sqrt{5}}}\\ &=\sqrt{\frac{1}{3}-\frac{2}{3\sqrt{5}}} \end{aligned}}

    Now for the second interval it is

    {\begin{aligned} P_2 &= \int_{\left\langle x \right\rangle+\sigma}^h\frac{1}{2\sqrt{hx}}dx\\ &= \ldots\\ &=1-\sqrt{\frac{1}{3}+\frac{2}{3\sqrt{5}}} \end{aligned}}

    Hence the total probability {P} is {P=P_1+P_2}

    {\begin{aligned} P&=P_1+P_2\\ &= \sqrt{\frac{1}{3}-\frac{2}{3\sqrt{5}}}+1-\sqrt{\frac{1}{3}+\frac{2}{3\sqrt{5}}}\\ &\approx 0.3929 \end{aligned}}
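
    As a numerical sanity check, the following Mathematica sketch evaluates the two integrals directly; the result is independent of {h}, so I set {h=1} (the names are my own):

    h = 1;
    mean = h/3; sigma = 2 h/(3 Sqrt[5]);
    p1 = Integrate[1/(2 Sqrt[h x]), {x, 0, mean - sigma}];
    p2 = Integrate[1/(2 Sqrt[h x]), {x, mean + sigma, h}];
    N[p1 + p2] (* 0.392938 *)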

Exercise 5 The probability density is {\rho(x)=Ae^{-\lambda(x-a)^2}}

  • Determine {A}.

    Making the change of variable {u=x-a} ({dx=du}) the normalization condition is

    {\begin{aligned} 1 &= A\int_{-\infty}^\infty e^{-\lambda u^2}du\\ &= A\sqrt{\frac{\pi}{\lambda}} \end{aligned}}

    Hence for {A} it is

    \displaystyle A=\sqrt{\frac{\lambda}{\pi}}

  • Find {\left\langle x \right\rangle}, {\left\langle x^2 \right\rangle} and {\sigma}.

    {\begin{aligned} \left\langle x \right\rangle &= \sqrt{\frac{\lambda}{\pi}}\int_{-\infty}^\infty (u+a)e^{-\lambda u^2}du\\ &= \sqrt{\frac{\lambda}{\pi}}\left(\int_{-\infty}^\infty ue^{-\lambda u^2}du+a\int_{-\infty}^\infty e^{-\lambda u^2}du \right)\\ &=\sqrt{\frac{\lambda}{\pi}}\left( 0+a\sqrt{\frac{\pi}{\lambda}} \right)\\ &= a \end{aligned}}

    If you don’t see why {\displaystyle\int_{-\infty}^\infty ue^{-\lambda u^2}du=0} (the integrand is an odd function integrated over a symmetric interval) check this post on my other blog.

    For {\left\langle x^2 \right\rangle} it is

    {\begin{aligned} \left\langle x^2 \right\rangle &= \sqrt{\frac{\lambda}{\pi}}\int_{-\infty}^\infty (u+a)^2e^{-\lambda u^2}du\\ &= \sqrt{\frac{\lambda}{\pi}}\left(\int_{-\infty}^\infty u^2e^{-\lambda u^2}du+2a\int_{-\infty}^\infty u e^{-\lambda u^2}du+a^2\int_{-\infty}^\infty e^{-\lambda u^2}du \right) \end{aligned}}

    Now {\displaystyle 2a\int_{-\infty}^\infty u e^{-\lambda u^2}du=0} as in the previous calculation.

    For the third term it is {\displaystyle a^2\int_{-\infty}^\infty e^{-\lambda u^2}du=a^2\sqrt{\frac{\pi}{\lambda}}}.

    The first integral is the hard one, and a special technique, differentiating under the integral sign with respect to {\lambda}, can be employed to evaluate it.

    {\begin{aligned} \int_{-\infty}^\infty u^2e^{-\lambda u^2}du &= \int_{-\infty}^\infty-\frac{d}{d\lambda}\left( e^{-\lambda u^2} \right)du\\ &= -\frac{d}{d\lambda}\int_{-\infty}^\infty e^{-\lambda u^2}du\\ &=-\frac{d}{d\lambda}\sqrt{\frac{\pi}{\lambda}}\\ &=\frac{1}{2}\sqrt{\frac{\pi}{\lambda^3}} \end{aligned}}

    Hence it is

    {\begin{aligned} \left\langle x^2 \right\rangle &= \sqrt{\frac{\lambda}{\pi}}\int_{-\infty}^\infty (u+a)^2e^{-\lambda u^2}du\\ &= \sqrt{\frac{\lambda}{\pi}}\left(\int_{-\infty}^\infty u^2e^{-\lambda u^2}du+2a\int_{-\infty}^\infty u e^{-\lambda u^2}du+a^2\int_{-\infty}^\infty e^{-\lambda u^2}du \right)\\ &= \sqrt{\frac{\lambda}{\pi}}\left( \frac{1}{2}\sqrt{\frac{\pi}{\lambda^3}}+0+a^2\sqrt{\frac{\pi}{\lambda}} \right)\\ &=a^2+\frac{1}{2\lambda} \end{aligned}}

    The variance is

    \displaystyle  \sigma^2=\left\langle x^2 \right\rangle-\left\langle x \right\rangle^2=\frac{1}{2\lambda}

    Hence the standard deviation is

    \displaystyle  \sigma=\frac{1}{\sqrt{2\lambda}}
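
    These results can be verified symbolically in Mathematica. Here is a minimal sketch (the names are my own; the Assuming wrapper tells Mathematica that {\lambda>0} and {a} is real):

    rho[x_] := Sqrt[lambda/Pi] Exp[-lambda (x - a)^2]
    Assuming[lambda > 0 && Element[a, Reals],
     {Integrate[rho[x], {x, -Infinity, Infinity}],
      Integrate[x rho[x], {x, -Infinity, Infinity}],
      Integrate[x^2 rho[x], {x, -Infinity, Infinity}]}]
    (* {1, a, a^2 + 1/(2 lambda)} *)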

— Mathematica file —

Exercise 2 was solved with some basic Mathematica code, which I post here hoping that it can be helpful to the readers of this blog.

(* the first 25 digits of Pi; N[Pi, 25] shows them as 3.141592653589793238462643 *)
piexpansion = IntegerDigits[3141592653589793238462643]

(* number of occurrences of each digit 0..9 *)
digitcount = Table[Count[piexpansion, i], {i, 0, 9}]

(* probability of each digit *)
digitprobability = digitcount/25

digits = Range[0, 9]

(* the average <j> *)
j = N[digits.digitprobability]

(* the average <j^2> *)
jsquared = N[(digits^2).digitprobability]

(* variance and standard deviation via <j^2> - <j>^2 *)
sigmasquared = jsquared - j^2
std = Sqrt[sigmasquared]

(* the same result from the mean squared deviation *)
deviationssquared = (piexpansion - j)^2
variance = Mean[deviationssquared]
standarddeviation = Sqrt[variance]


The Wave Function 02

— 1.3. Probability —

In the previous post we were introduced to the Schroedinger equation (equation 1), stated Born’s interpretation of the physical meaning of the wave function, and took a little glimpse into some philosophical positions one might hold regarding Quantum Mechanics.

Since probability plays such an essential role in Quantum Mechanics, a brief revision of some of its concepts is in order, so that we are sure we have the tools that allow one to do Quantum Mechanics.

— 1.3.1. Discrete variables —

The example used in the book in order to expound on the terminology and concepts of probability is a set of 14 people in a classroom:

  • one person is 14 years old
  • one person is 15 years old
  • three people are 16 years old
  • two people are 22 years old
  • two people are 24 years old
  • five people are 25 years old

Let {N(j)} represent the number of people with age {j}. Hence

  • {N(14)=1}
  • {N(15)=1}
  • {N(16)=3}
  • {N(22)=2}
  • {N(24)=2}
  • {N(25)=5}

One can represent the previous data points by use of a histogram:

[Figure: histogram of {N(j)}, the number of people with age {j}]

The total number of people in the room is given by

\displaystyle N=\sum_{j=0}^{\infty}N(j) \ \ \ \ \ (2)

 

Adopting a frequentist definition of probability, Griffiths then makes a number of definitions of probability concepts under the assumption that the phenomena under study are discrete.

Definition 1 The probability of an event {j}, {P(j)}, is proportional to the number of elements that have the property {j} and inversely proportional to the total number of elements ({N}) under study.

\displaystyle P(j)=\frac{N(j)}{N} \ \ \ \ \ (3)
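
For instance, in the classroom example above it is {P(25)=N(25)/N=5/14}.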

 

It is easy to see that from equation 3 together with equation 2 it follows that

\displaystyle \sum_{j=0}^{\infty}P(j)=1 \ \ \ \ \ (4)

 

After defining {P(j)} we can also define what is the most probable value for {j}.

Definition 2 The most probable value for {j} is the one for which {P(j)} is a maximum.
Definition 3 The average value of {j} is given by

\displaystyle <j>=\sum_{j=0}^{\infty}jP(j) \ \ \ \ \ (5)

 

But what if we are interested in computing the average value of {j^2}? Then the appropriate expression must be

\displaystyle <j^2>=\sum_{j=0}^{\infty}j^2P(j)

Hence one can write in full generality that the average value of some function of {j}, denoted by {f(j)}, is given by

\displaystyle <f(j)>=\sum_{j=0}^{\infty}f(j)P(j) \ \ \ \ \ (6)

 

After introducing the definition of the maximum of a probability distribution it is time to introduce a couple of definitions that relate to the symmetry and spread of a distribution.

Definition 4 The median is the value of {j} for which the probability of getting a result larger than {j} is the same as the probability of getting a result smaller than {j}.

After seeing a definition that relates to the symmetry of a distribution we’ll introduce a definition that is an indication of its spread.

But first we’ll look at two examples that will serve as a motivation for that:

[Figure: histogram of a distribution sharply peaked about its average]

and

[Figure: histogram of a broad, flat distribution with the same average]

Both histograms have the same median, the same average, the same most probable value and the same number of elements. Nevertheless it is visually obvious that the two histograms represent two different kinds of phenomena.

The first histogram represents a phenomenon whose values are sharply peaked about the average (central) value.

The second histogram, on the other hand, represents a phenomenon with a broader and flatter distribution.

The existence of such a difference between two otherwise equal distributions makes it necessary to introduce a measure of spread.

A first thought could be to define the difference about the average for each individual value

\displaystyle \Delta j=j-<j>

This approach doesn’t work, since the average of {\Delta j} vanishes identically, {<\Delta j>=<j>-<j>=0}: positive and negative deviations always cancel each other.

One way to circumvent this issue would be to use {|\Delta j|}, and even though this approach does work theoretically it has the problem of employing a non-differentiable function.

These two issues are avoided if one uses the squares of the deviations about the average.

The quantity of interest is called the variance of the distribution.

Definition 5 The variance of a distribution, {\sigma ^2}, is given by the expression

\displaystyle \sigma ^2=<(\Delta j)^2> \ \ \ \ \ (7)

 

Definition 6 The standard deviation, {\sigma}, of a distribution is given by the square root of its variance.

Expanding the square in definition 5, {\sigma^2=<(j-<j>)^2>=<j^2-2j<j>+<j>^2>=<j^2>-2<j>^2+<j>^2}, hence for the variance it is

\displaystyle \sigma ^2=<j^2>-<j>^2 \ \ \ \ \ (8)

 

Since by definition 5 the variance is manifestly non-negative it follows that

\displaystyle <j^2> \geq <j>^2 \ \ \ \ \ (9)

 

where equality holds only when all elements of the distribution take the same value.
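
As a quick illustration that equations 7 and 8 agree, here is a minimal Mathematica sketch using the classroom data above (the variable names are my own):

ages = {14, 15, 16, 22, 24, 25};
counts = {1, 1, 3, 2, 2, 5};
n = Total[counts];
mean = ages.counts/n;
varfromdeviations = (((ages - mean)^2).counts)/n (* <(Delta j)^2> = 130/7 *)
varfrommoments = ((ages^2).counts)/n - mean^2 (* <j^2> - <j>^2 = 130/7 *)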

— 1.3.2. Continuous variables —

Thus far we’ve always assumed that we are dealing with discrete variables. We now generalize our definitions and results to continuous distributions.

One has to note first that when dealing with phenomena that allow for a continuous description the probability of finding any single exact value vanishes, and one should instead talk about the probability of a given interval.

With that in mind, and assuming that the distributions are sufficiently well behaved, the probability of an event being between {x} and {x+dx} is given by

\displaystyle \rho(x)dx \ \ \ \ \ (10)

 

The quantity {\rho (x)} is called the probability density.

The generalizations for the other results are:

\displaystyle \int_{-\infty}^{+\infty}\rho(x)dx=1 \ \ \ \ \ (11)

 

\displaystyle <x>=\int_{-\infty}^{+\infty}x\rho(x)dx \ \ \ \ \ (12)

 

\displaystyle <f(x)>=\int_{-\infty}^{+\infty}f(x)\rho(x)dx \ \ \ \ \ (13)

 

\displaystyle \sigma ^2=<(\Delta x)^2>=<x^2>-<x>^2 \ \ \ \ \ (14)
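
As a quick worked example of these formulas, take the uniform density {\rho(x)=1/a} on {\left[0,a\right]} (and zero elsewhere). Then

\displaystyle <x>=\int_0^a\frac{x}{a}dx=\frac{a}{2},\qquad <x^2>=\int_0^a\frac{x^2}{a}dx=\frac{a^2}{3},\qquad \sigma^2=\frac{a^2}{3}-\frac{a^2}{4}=\frac{a^2}{12}

For {a=\pi} this is exactly the needle distribution treated in the exercises.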

 

 

The Wave Function 01

— 1. The Wave Function —

The purpose of this section is to introduce the wave function of Quantum Mechanics and explain its physical relevance and interpretation.

— 1.1. The Schroedinger Equation —

Classical Dynamics’ goal is to derive the equation of motion, {x(t)}, of a particle of mass {m}. After finding {x(t)} all other dynamical quantities of interest can be computed from {x(t)}.

Of course the problem is: how does one find {x(t)}? In classical mechanics this problem is solved by applying Newton’s Second Axiom

\displaystyle  F=\frac{dp}{dt}

For conservative systems it is {F=-\dfrac{\partial V}{\partial x}} (previously we’ve used {U} to denote the potential energy but will now use {V} to accord to Griffith’s notation).

Hence for classical mechanics one has

\displaystyle  m\frac{d^2 x}{dt^2}=-\dfrac{\partial V}{\partial x}

as the equation that determines {x(t)} (with the help of the suitable initial conditions).
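
As a concrete illustration (the potential and the numerical values here are my own choices, not Griffiths’), here is a minimal Mathematica sketch that solves this equation of motion for a harmonic potential {V(x)=\frac{1}{2}kx^2} with initial conditions {x(0)=1} and {x'(0)=0}:

(* F = -dV/dx = -k x for the harmonic potential *)
k = 1; m = 1;
sol = NDSolve[{m x''[t] == -k x[t], x[0] == 1, x'[0] == 0}, x, {t, 0, 10}];
Plot[Evaluate[x[t] /. sol], {t, 0, 10}]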

Even though Griffiths only states the Newtonian formalism approach to Classical Dynamics, we already know from Classical Physics that apart from the Newtonian formalism one also has the Lagrangian and Hamiltonian formalisms as suitable alternatives (and most of the time more appropriate ones) for deriving the equation of motion.

As for Quantum Mechanics, one has to resort to the Schroedinger Equation in order to derive the wave function that specifies the physical state of the particle under study.

\displaystyle   i\hbar\frac{\partial \Psi}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\Psi}{\partial x^2}+V\Psi \ \ \ \ \ (1)

— 1.2. The Statistical Interpretation —

Of course now the question is how one should interpret the wave function. Firstly its very name should sound strange: a particle is something that is localized, while a wave is something that occupies an extended region of space.

According to Born the wave function of a particle is related to the probability of it occupying a region of space.

The proper relationship is that {|\Psi(x,t)|^2} is a probability density: {|\Psi(x,t)|^2dx} is the probability of finding the particle between {x} and {x+dx}.
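
Consequently, the probability of finding the particle between {a} and {b} at time {t} is

\displaystyle P_{[a,b]}(t)=\int_a^b|\Psi(x,t)|^2dx

and normalization requires {\displaystyle\int_{-\infty}^{+\infty}|\Psi(x,t)|^2dx=1}.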

This interpretation of the wave function naturally introduces an indeterminacy into Quantum Theory, since one cannot predict with certainty the position of a particle when it is measured, but only the probability of finding it at a given position.

The conundrum that now presents itself is this: after measuring the position of a particle we know exactly where it is. But what about before the act of measurement? Where was the particle before our instruments interacted with it and revealed its position to us?

These questions have three possible answers:

  1. The realist position: A realist is a physicist who believes that the particle was already at the position where it was later measured. If this position is true it implies that Quantum Mechanics is an incomplete theory, since it can’t predict the exact position of a particle but only the probability of finding it at a given position.
  2. The orthodox position: An orthodox quantum physicist is someone that believes that the particle had no definite position before being measured and that it is the act of measurement that forces the particle to occupy a position.
  3. The agnostic position: An agnostic physicist is one who maintains that the answer to this question cannot be known and so refuses to answer it.

Until 1964 advocating any one of these three positions was acceptable. But in that year John Stewart Bell proved a theorem, published as On the Einstein Podolsky Rosen paradox, which showed that whether or not the particle has a definite position before the act of measurement makes an observable difference in the results of some experiments (in due time we’ll explain what we mean by this).

Hence the agnostic position was no longer a respectable stance to have, and it was up to experiment to show whether Nature sides with the realists or with the orthodox.

Nevertheless, whatever their disagreements about the position of a particle when it isn’t being measured, all three groups of physicists agreed on what would be measured immediately after the first measurement of the particle’s position. If the first measurement yields {x} then an immediately repeated measurement has to yield {x} too.

In conclusion, the wave function can evolve in two ways:

  1. It evolves without any kind of discontinuity (unless the potential happens to be unbounded at a point) under the Schroedinger Equation.
  2. It collapses suddenly to a single value due to the act of measurement.

The interested reader can also take a look at the following book from Bell: Speakable and Unspeakable in Quantum Mechanics