
What time does the particle reach the screen in this thought experiment?

Physics Asked by BIGFATNIH on January 3, 2021

Suppose a particle with a Gaussian wavefunction initially moves toward a position-detector screen. How do we obtain the 'time of arrival' distribution, when time can't be an observable? Should the average time of arrival be inversely proportional to the mean of the momentum distribution?

What does quantum mechanics say about predicting the distribution of arrival times? For example, if the velocity probability distribution has a wide spread, does the arrival time also have a wide spread? This idea seems natural, but it doesn't quite make sense, because technically the particle doesn't even have a definite velocity during the journey.

How do we know at what time $t$ a wavefunction will collapse? Could the wavefunction 'pass through' the screen without collapsing?

EDIT:

To the comments suggesting that the absence of measurement will partially collapse the wavefunction: we have to be careful in defining what this means, as it is neither an evolution of the state by the Schrödinger equation nor a classical measurement. It does not tell us anything about the location of the particle at the time of measurement.

For example, suppose my wavefunction's probability density is zero outside some range of $x$ at a time $t$. I still can't say that the particle's location before measurement 'was' in this range. The probability distribution tells us nothing about the particle's actual location history; it only says what the probability of a measurement outcome is.
Otherwise, we could produce nonzero probabilities for the particle jumping across space instantaneously.

There’s another subtle confusion about the quantum Zeno effect. If my detector lies in an interval $I$, and at time $T$ the position probability integrates to $\delta$ inside the detector, this does not tell me that there was a probability $\delta$ of measuring the particle! It only tells me what the probability of the position being in the range $I$ would have been, had it been measured at time $T$. So the fact that the particle was not measured cannot be used in terms of probability.

5 Answers

I just finished a thesis on this subject and I'm happy to share. None of the linked papers are my own.

The time of arrival in quantum mechanics is actually a subject of ongoing research. It is certainly a question which begs for an answer, as experiments have been able to measure the distribution of arrival times for decades (see for example Fig. 3 of this 1997 paper by Kurtsiefer et al.). Note: If you do not have access to journals, let me know and I will see if I can include the figure in this answer.

Part 1 of this answer describes why there is a problem with arrival time in quantum mechanics.

Part 2 outlines the modern situation in regards to this problem.

Part 3 gives, in my view, the best answers we currently have, which still need experimental verification.

1. New ideas are needed here: the observable-operator formalism seems not to work for arrival times

Normally in QM you have operators $A$ corresponding to the variables used in classical mechanics. This lets you define a basis of eigenfunctions of that operator, which are found through the equation $A|a\rangle = a|a\rangle$. With such a basis in hand, the probability of finding the value $a$ in an experiment on a particle in state $|\psi\rangle$ is $|\langle a|\psi\rangle|^2$.

Though the probability distribution of arrival times can be measured in experiment, predicting it in theory is less straightforward. There are two theorems I am aware of which indicate that the textbook observable formalism above will not work for arrival times:

  1. Pauli's Theorem: In 1933, Wolfgang Pauli published a book on Quantum Mechanics called The General Principles of Wave Mechanics. In a footnote of this book, Pauli notes that if you have the commutation relation $[T,H]=i\hbar$ for some supposed self-adjoint time operator $T$, then $H$ would have to have all eigenvalues in $(-\infty, \infty)$, which is not possible because systems could not have a ground state. This is an early variant of the theorem, which has since been made more precise (modern proofs can be found in section 2 of this 1981 paper).
  2. Allcock's Theorem: In 1969, Allcock gave another proof that the usual formalism won't work with time. He shows that it is impossible to have a complete set of orthonormal arrival-time eigenstates which transform properly under the change of coordinates $(t,\vec{r}) \to (t+\Delta t,\vec{r})$ - and thus that there cannot be an adequate self-adjoint time operator, since this would result in such eigenstates. The proof begins just before Equation 2.18 with "The reader...".
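To make Pauli's footnote concrete, here is a short heuristic version of his argument (the modern proofs cited above handle the domain subtleties that this sketch ignores). Suppose $T$ were self-adjoint with $[T,H]=i\hbar$. Then for any real $\epsilon$, the unitary $U_\epsilon = e^{-i\epsilon T/\hbar}$ satisfies

$$U_\epsilon^\dagger H U_\epsilon = H - \epsilon$$

by the identity $e^{A}Be^{-A}=B+[A,B]+\frac{1}{2!}[A,[A,B]]+\dots$, which truncates after the first commutator here. So if $H|E\rangle = E|E\rangle$, then $H\,U_\epsilon|E\rangle = (E-\epsilon)\,U_\epsilon|E\rangle$: every real number would be an eigenvalue of $H$, and no ground state could exist.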

A number of authors have tried to define a time operator anyway, yet none of the variants I have seen were able to subvert both of the above theorems, rendering them unphysical.

2. Arrival time approaches outside of the textbook formalism

Because of the issues in Part 1 of this answer, many authors have tried to come up with ways to derive a distribution for the arrival time of a particle outside of the usual formalism. The distribution we seek is usually notated $\Pi(t)$ and should of course have the property that

$$\int_a^b \Pi(t)\,\mathrm{d}t = \text{Probability that the particle arrives at time } t \in [a,b]$$

There is no lack of proposals for this; the problem, rather, is that there are very many proposals which do not agree with one another. You can see a non-exhaustive summary of some of those proposals in this review paper by Muga (2000). It contains about half of the proposals I am aware of today.

Having gone through many of the existing proposals in detail, I will give my opinion: they are, for the most part, grotesquely unscientific. Problems with some of these proposals (in peer-reviewed papers!) include:

  • Not normalizable even for reasonable $\psi$ like Gaussian wave packets
  • Predicts negative probabilities
  • Only works in 1 dimension
  • Only works when $V(x)=0$

3. The best answers we have today

In recent months, an effort has been gathering to actually do experiments to rule out many of these proposals. An experiment is planned for the near future. Until the results come out, any conclusions about which proposal is best are subject to being proven wrong. That being said, some proposals are clearly very ad hoc and inspire little confidence, while in others I cannot find objective flaws. According to my own, always-possibly-flawed understanding after working in this field, the best proposals we have today are

3.1 Bohmian Mechanics / The Quantum Flux

Bohmian Mechanics is a quantum theory in which particles follow definite trajectories (see the double slit trajectories for example). The predictions of Bohmian Mechanics agree with standard QM for position measurements. For each individual trajectory the arrival time is the moment when it first hits the detector. Since the initial position is unknown, many different trajectories are possible, and this defines a distribution of different possible arrival times.

It has been proven that typically, the arrival time distribution in Bohmian Mechanics is exactly equal to the (integrated) flux of probability across the detector $D$:

$$\Pi_{BM}(t) = \int_{\partial D} \vec{J}(\vec{r},t)\cdot \hat{n}\ \mathrm{d}A$$

where $\vec{J}$ is the flux as described in any QM textbook, and $\hat{n}$ is a unit vector pointing into the detector surface. This is the rate at which probability enters the detector, and so it very nicely correlates the arrival-time statistics with the position statistics.
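As a concrete illustration, the flux prediction is easy to evaluate numerically in one dimension. The sketch below (all parameters are assumed for illustration, in natural units $\hbar=m=1$) evolves a free Gaussian packet with an FFT and evaluates $J(L,t)=\frac{\hbar}{m}\,\mathrm{Im}\!\left(\psi^*\partial_x\psi\right)$ at a detector at $x=L$:

```python
import numpy as np

# Free-particle evolution by FFT: psi(k,t) = psi(k,0) * exp(-i*hbar*k^2*t/(2m)).
hbar = m = 1.0
N, dx = 4096, 0.05
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, dx)

sigma, k0, L = 1.0, 5.0, 20.0        # initial width, mean wavenumber, detector position (assumed)
psi0 = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi0_k = np.fft.fft(psi0)
iL = np.argmin(np.abs(x - L))        # grid index closest to the detector

def flux_at_detector(t):
    """J(L, t) = (hbar/m) * Im(psi^* dpsi/dx) evaluated at x = L."""
    psi_t = np.fft.ifft(psi0_k * np.exp(-1j * hbar * k**2 * t / (2 * m)))
    dpsi = np.gradient(psi_t, dx)
    return (hbar / m) * np.imag(np.conj(psi_t[iL]) * dpsi[iL])

ts = np.linspace(0.01, 10.0, 500)
Pi = np.array([flux_at_detector(t) for t in ts])
t_peak = ts[np.argmax(Pi)]           # should sit near L / v_g = L m / (hbar k0) = 4
total = Pi.sum() * (ts[1] - ts[0])   # total crossing probability, close to 1
print(t_peak, total)
```

In this simple rightward-moving case the flux stays nonnegative and integrates to approximately 1, so it is a legitimate arrival-time density; the interesting disagreements discussed next arise precisely when the flux goes negative.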

However, the quantity $\vec{J}\cdot \hat{n}$, and therefore the entire integral, may be negative. In this case the flux clearly does not work as a probability density, and it has been shown that it is exactly in this case (negativity for some point on the detector) that the Bohmian Mechanics prediction differs from the flux. The prediction made by Bohmian Mechanics, obtained by averaging over many trajectories, is always nonnegative. Negative flux corresponds to Bohmian trajectories which loop around and leave the detector region.

3.2. The Kijowski Distribution

The second-most reasonable candidate I have seen is the Kijowski distribution. In this 1974 paper, Kijowski postulated it for the free particle by declaring a series of axioms. These axioms nicely yield a unique distribution, but as Kijowski notes,

Our construction is set up for free particles in both the non-relativistic and relativistic case and cannot be generalized for the non-free wave equation

Nonetheless the approach is well-liked, as it yields a priori reasonable results and has a tendency to resemble the quantum flux. For this reason, Muga began calling it and its generalizations the "standard distribution".

By abandoning the axiomatic approach, a variant inspired by Kijowski's distribution has been created which works for other potentials; see this paper (2000). However, there is a spatial nonlocality to this distribution, i.e. the position statistics don't correspond to the arrival-time statistics. Basically, it predicts that a particle can be found after a finite time at a location where, according to standard quantum mechanics, there is a 0% chance of finding it - this seems unphysical. A critique is given by Leavens in this paper (2002).

Final Remarks

Arrival time proposals are a dime a dozen at the moment, and even having done research in this field it is infeasible to rigorously go through every approach anyone has used in the literature. In addition, an experiment has not yet been done, so in some sense, science does not have an answer for you yet. To remedy this I have given what I can, namely my own understanding of the state of things after having spent a fair amount of time on the subject. If things go as I hope they do, there will be a scientific answer to this question in the coming years. In addition to the aforementioned experiment, there is for example an experimental proposal, possible to implement with modern-day technology, which could test arrival times in the most "juicy" regime: where the flux is negative. To be clear about potential biases, I know the authors of this paper. My thesis was not on the Bohmian approach.

Correct answer by doublefelix on January 3, 2021

This is similar to your last question, and I think it's answered by the answer I wrote to that one, but I'll try to explain it in a slightly different way.

The short version is that whenever a detector is switched on and actively waiting to detect something, a measurement (perhaps interaction-free) and an associated collapse takes place at every time.

In introductory quantum mechanics courses, measurements are generally treated as "complete": you measure position, for example, and the wave function is a delta function (or at least a narrow Gaussian) in position space after the collapse. No real detector works like that – it would have to fill all of space.

The simplest realistic example of a measurement device is a position detector that measures the value of the is-the-particle-here operator, which has two eigenvalues, 0 and 1, whose associated eigenstates are wave functions that are zero inside the detector and wave functions that are zero outside the detector. At each moment, if the detector detects the particle, the particle's wave function afterwards is zero outside, and at each moment, if it doesn't, the particle's wave function afterwards is zero inside. Both of these "disappearances" of part of the wave function are collapses associated with measurement. In the latter case, it's an interaction-free measurement. You will end up randomly (with probabilities dictated by the Born rule) in one of the futures where the detector measured 1 at a particular time, or in the future where it measured 0 at all times, and in each case the wave function will have "updated" to be consistent with what you know about when it was and wasn't detected.

Instead of thinking about this in the collapse picture, you can think about it in the many-worlds picture. At any given time, you can write the wave function as a weighted sum of a part where the electron is in the detector and a part where it's outside. By linearity it will be the same weighted sum of time-evolved versions of those states at any later time. The inside state evolves into a state where the environment differs from the outside state's environment in a complicated way, perhaps involving an audible click or an electrical impulse. They're different enough that there is no chance of future wavelike interference between them, so they can be treated as separate classical worlds.

Although measurements happen all the time, they don't happen continuously. There is a quantization of measurement times, associated with quantum interference in the early stages of detection, so the number of outcomes/worlds is finite. (Don't ask me for more details because I don't know them – but I'm pretty sure this is true.)

You can think of your screen as being made from a bunch of position detectors glued together, and the analysis is the same.

If the particle has zero chance of being at a detector at a given time, then no measurement or collapse happens, but it isn't necessary to treat this as a separate case – it's equivalent to the general case with the probability of one outcome being 0.


Edit in response to comments:

how is that every moment it doesn't click affects the wavefunction? the wavefunction evolves as per schrodingers equation which has nothing to do with the apparatus

The detector's failure to click tells you that the particle isn't in the detector, which is information about its location and so necessarily causes a collapse. This is called interaction-free measurement.

Possibly you're thinking that this can't be true because if the particle was being measured all the time then its behavior would become classical. The reason that doesn't happen is that failure to click usually doesn't tell you much about the particle's location, therefore the collapse doesn't change the wave function very much. If before the negative measurement the particle was spread out over a large spatial area (which includes the small detector), then after the negative measurement there's a small "hole" at the detector where the wave function is zero, while the rest of the wave function is completely unaffected (except that it's slightly rescaled to renormalize it). The small hole doesn't cause a large change in the particle's behavior.
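To see that a null result barely disturbs a spread-out state, here is a toy numerical sketch of the update rule described above (the grid, detector location, and wavefunction are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A particle spread over a large region, with a small "is-the-particle-here" detector.
N = 1000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 8).astype(complex)              # broad packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize
detector = (x > 1.0) & (x < 1.2)                     # small detector region

p_click = np.sum(np.abs(psi[detector])**2) * dx      # Born-rule click probability
if rng.random() < p_click:
    psi[~detector] = 0.0                             # click: collapse onto the detector
else:
    psi[detector] = 0.0                              # no click: cut a small "hole"
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # renormalize either way

print(p_click)   # small, so the no-click ("interaction-free") update barely changes psi
```

Repeating this update at every interaction timescale, click or no click, is exactly the "measurement at every time" described above; because the click probability is small per step, the no-click branch leaves the bulk of the wavefunction essentially untouched.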

Answered by benrg on January 3, 2021

EDIT: After some discussion, the OP made it clear that they were actually asking about a more fundamental issue: given a time-dependent probability density $p(x,t)$, and given that we are observing a fixed spatial interval, when do we expect to first observe the event?

(Only the first observation is important, because the detection of a particle is an interaction that changes its wavefunction, and so we stop wondering when we'll detect the particle once we actually do detect the particle).

Let's ask a simpler question first, that might guide our intuition. Let's roll a die. The outcomes are 1 to 6, all equally probable, and each roll of the die is a discrete time interval (let's say we roll once per second). Let's ask the question: how long will it take, on average, for us to roll a 4?

The probability of rolling a 4 on the first roll is $1/6$. The probability of rolling your first 4 on the second roll and not on the first roll is $1/6\times(1-1/6)$. Likewise, the probability of rolling a 4 on the third roll but not on the first or second is $1/6\times(1-1/6)^2$. And the probability of rolling a 4 on the $n$th roll but not on any previous roll is $1/6\times (1-1/6)^{n-1}$. So, from our original probability distribution of outcomes per time interval, we can assemble a probability distribution of the amount of time it will take us to see a 4:

$$P(t_n)=1/6\times(1-1/6)^{n-1}$$

where $t_n$ is the $n$th time interval. The mean value of $t_n$, the expected time interval in which we'll see our first 4, is:

$$\bar{t}=\sum_{n=1}^\infty nP(t_n)=\sum_{n=1}^\infty n\times 1/6\times (1-1/6)^{n-1}=6$$

So we should expect it to take roughly 6 seconds to see our first 4.
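This expectation is easy to check numerically, both by summing the series and by simulation (a quick sketch; nothing about it is specific to quantum mechanics):

```python
import random

p = 1 / 6

# (a) Sum the series  sum_n n * p * (1-p)^(n-1)  directly (truncated far into the tail).
expected = sum(n * p * (1 - p) ** (n - 1) for n in range(1, 2000))

# (b) Monte Carlo: roll a die until the first 4 appears, average the wait.
random.seed(0)
def rolls_until_four():
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 4:
            return n

trials = 100_000
mc = sum(rolls_until_four() for _ in range(trials)) / trials
print(expected, mc)   # both close to 6
```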

With a few tweaks, we can apply that logic to our current situation. Suppose we're observing over the spatial interval $a<x<b$. First, we need to calculate the probability of observing our outcome as a function of time:

$$P(t)=\int_{a}^b p(x,t)\,dx$$

Now, we discretize our continuous time parameter. Our detector interacts with the environment, but those interactions are not instantaneous: every interaction that would allow a detection has some associated timescale $\Delta t$ (for example, detectors based on ionization would have a timescale associated with the amount of time an incoming particle takes to ionize an atom). So we can model our detector as a device that periodically "checks" to see whether it interacted with a particle. So now we have a set of discrete time intervals, $t=0, \Delta t, 2\Delta t, \dots$ during which the metaphorical dice are rolled.

But this time, each time these metaphorical dice are rolled, the probability is different. And it's clear that we can't actually use the probability at a particular instant, either, because that would imply that we know what the "phase" of the detector's interactions is, which we don't. So instead, we average the probability over one interaction timescale. Let $P_n$ be the probability that a detector detects a particle in the interaction-timescale interval $(n\Delta t, (n+1)\Delta t)$:

$$P_n=\frac{1}{\Delta t}\int_{n\Delta t}^{(n+1)\Delta t} P(t)\,dt$$

So we can now play the same game as before: the probability that we detect a particle on the very first interaction timescale is $P_0$. The probability that we detect a particle on the second interaction timescale but not the first one is $P_1(1-P_0)$. The probability that we detect a particle on the third interaction timescale but not the second or first is $P_2(1-P_1)(1-P_0)$. And so on, generating our formula for the probability of seeing our particle on the $n$th interaction timescale:

$$P(\text{detection after }n\text{ interaction timescales})=P_n(1-P_{n-1})(1-P_{n-2})\dots(1-P_1)(1-P_0)$$

Now that we have our distribution for arbitrary $n$, this means that the expected number of interaction timescales that we'll have to wait to detect the particle is:

$$\bar{n}=\sum_{n=0}^\infty nP_n(1-P_{n-1})(1-P_{n-2})\dots(1-P_0)$$

Once we have numerically calculated $\bar{n}$, we can easily get the expected wait time before detecting a particle:

$$\bar{t}=\bar{n}\Delta t$$
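The whole recipe can be put together in a few lines of code. This is a sketch with assumed numbers: a rigid Gaussian density drifting at constant speed toward a detector on $[a,b]$, with the midpoint value used to approximate each window average $P_n$:

```python
import numpy as np

v, sigma = 1.0, 0.5      # drift speed and (fixed) packet width -- assumed
a, b = 4.0, 5.0          # detector interval [a, b] -- assumed
dt = 0.05                # detector interaction timescale -- assumed

def p(x, t):
    """Probability density p(x,t): a Gaussian of width sigma centered at v*t."""
    return np.exp(-(x - v * t) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def P(t):
    """P(t) = integral of p(x,t) over the detector, by a simple Riemann sum."""
    xs = np.linspace(a, b, 400)
    return p(xs, t).sum() * (xs[1] - xs[0])

N = 400                  # number of interaction windows kept (t up to N*dt = 20)
Pn = np.array([P((n + 0.5) * dt) for n in range(N)])   # midpoint ~ window average

# First-detection probabilities: P_n * (1-P_{n-1}) * ... * (1-P_0)
survival = np.concatenate(([1.0], np.cumprod(1 - Pn)[:-1]))
first = Pn * survival
n_bar = np.sum(np.arange(N) * first) / np.sum(first)   # conditioned on detection
t_bar = n_bar * dt
print(t_bar)   # a bit before the packet center reaches the detector edge at t = a/v
```

Note that with a small $\Delta t$ the detection concentrates near the packet's leading edge, so the expected detection time lands somewhat before the center's classical arrival time.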


With that out of the way, let's calculate the actual probability density function.

Let's suppose that you prepare your Gaussian wavepacket in a minimum-uncertainty configuration. What I mean by that is described below.

The Heisenberg uncertainty principle states:

$$\sigma_x\sigma_p\geq\frac{\hbar}{2}$$

It turns out that the situation where the product $\sigma_x\sigma_p$ is minimized is actually a Gaussian wavefunction (proofs of this can be found elsewhere on the internet), so for that particular Gaussian wavefunction, we have:

$$\sigma_x\sigma_p=\frac{\hbar}{2}$$

The momentum probability distribution is also Gaussian, with some mean $\bar{p}$ and a standard deviation $\sigma_p=\frac{\hbar}{2\sigma_x}$.

So if we start with our Gaussian momentum wavefunction $\psi(k)=e^{-\alpha(k-k_0)^2}$, where $\alpha=\frac{\hbar^2}{4\sigma_p^2}=\sigma_x^2$, we can follow this procedure to find the position wavefunction as a function of time (and then normalize said wavefunction, because the authors of that source apparently didn't bother to do so):

$$\psi(x,t)=\left(\frac{\alpha}{2\pi}\right)^{1/4}\frac{1}{\sqrt{\alpha+i\beta t}}e^{i(k_0x-\omega_0 t)}e^{\frac{-(x-v_g t)^2}{4(\alpha+i\beta t)}}$$

where $v_g=\frac{d\omega}{dk}$ evaluated at $k_0=\frac{\bar{p}}{\hbar}$, and $\beta=\frac{1}{2}\frac{d^2\omega}{dk^2}$, also evaluated at $k_0$.

As you can see, in order to proceed, we need a relation between $\omega$ and $k$. This is called the dispersion relation, and for a relativistic electron, the dispersion relation is:

$$\omega=c\sqrt{k^2+(m_ec/\hbar)^2}$$

This means that:

$$\omega_0=c\sqrt{k_0^2+(m_ec/\hbar)^2}$$

$$v_g=\frac{ck_0}{\sqrt{k_0^2+(m_ec/\hbar)^2}}$$

$$\beta=\frac{c}{2\sqrt{k_0^2+(m_ec/\hbar)^2}}-\frac{ck_0^2}{2(k_0^2+(m_ec/\hbar)^2)^{3/2}}$$

Then, figuring out the probability that the electron will be at the screen position $x_s$ as a function of time is as simple as evaluating $|\psi(x_s,t)|^2$:

$$|\psi(x_s,t)|^2=\sqrt{\frac{\alpha}{2\pi(\alpha^2+\beta^2t^2)}}\exp\left(\frac{-\alpha(x_s-v_gt)^2}{2(\alpha^2+\beta^2t^2)}\right)$$


Obviously, this general solution doesn't tell us mere mortals very much in terms of intuition, so there are two special cases that are helpful to develop some understanding of the situation:

The ultra-relativistic limit

In the case where $k\gg m_ec/\hbar$, the dispersion relation reduces to:

$$\omega=ck$$

which means:

$$\omega_0=ck_0$$

$$v_g=c$$

$$\beta=0$$

Plugging these into the general solution, we find that:

$$|\psi(x_s,t)|^2=\frac{1}{\sqrt{2\pi}\sigma_x}\exp\left(-\frac{(x_s-ct)^2}{2\sigma_x^2}\right)$$

As you can see, the wavefunction simply travels to the right at velocity $c$ over time, with a constant width $\sigma_x$ as a function of time. So the uncertainty in detection time depends only on the uncertainty in the initial position of the electron.

The non-relativistic limit

In the limit where $k\ll m_ec/\hbar$, the dispersion relation reduces to:

$$\omega\approx \frac{m_ec^2}{\hbar}+\frac{\hbar k^2}{2m_e}$$

which means that:

$$\hbar\omega_0=m_ec^2+\frac{\bar{p}^2}{2m_e}$$

$$v_g=\frac{\hbar k_0}{m_e}=\frac{\bar{p}}{m_e}$$

$$\beta=\frac{\hbar}{2m_e}$$

Plugging these into the original formula, we find that the center of the wavepacket travels with a velocity $v_g$, as you would expect, and that the wavepacket also spreads out quite a bit over time: the width of the wavepacket is $\sqrt{\alpha^2+\left(\frac{\hbar t}{2m_e}\right)^2}$. So the uncertainty in the detection time depends both on the initial uncertainty in position and on the distance from the mean initial position to the screen. Generally, the further away the screen is, the more uncertain the detection time will be.
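The scaling claimed here is easy to make quantitative. For a free non-relativistic Gaussian packet, the position-space width grows as $\sigma(t)=\sigma_x\sqrt{1+\left(\frac{\hbar t}{2m_e\sigma_x^2}\right)^2}$ (the standard textbook spreading formula), so evaluating it at the mean arrival time $t=d/v_g$ shows how the screen distance $d$ feeds into the detection-time spread. A sketch with assumed numbers, in SI units:

```python
import numpy as np

hbar = 1.054571817e-34    # J*s
m_e = 9.1093837015e-31    # kg

def width(t, sigma_x):
    """Width of a free non-relativistic Gaussian packet after time t."""
    return sigma_x * np.sqrt(1 + (hbar * t / (2 * m_e * sigma_x**2)) ** 2)

sigma_x = 1e-9            # 1 nm initial width -- assumed
v_g = 1e5                 # 100 km/s mean velocity (non-relativistic) -- assumed

for d in (1e-3, 1e-2, 1e-1):            # screen at 1 mm, 1 cm, 10 cm
    t = d / v_g                         # mean time for the packet center to arrive
    dt_spread = width(t, sigma_x) / v_g # rough detection-time spread: sigma(t)/v_g
    print(d, width(t, sigma_x), dt_spread)
```

With these (assumed) numbers the spreading term dominates almost immediately, so the arrival-time uncertainty grows essentially linearly with the screen distance, matching the qualitative conclusion above.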


With these two extremes, we can now interpolate between them to say something about what happens to a relativistic (but not ultra-relativistic) electron: increasing the distance to the screen still increases the uncertainty in detection time, but not by as much as in the non-relativistic case (which makes sense - at relativistic speeds, changing your momentum doesn't actually change your velocity very much).

Incidentally, this is why time-of-flight detectors in particle physics experiments only work well at lower energies: determining momentum by measuring velocity gets more and more difficult as energy increases.

Answered by probably_someone on January 3, 2021

The following is a failed attempt (at best, with an extra assumption, it can only work in cases where momentum is conserved) and is too long for a comment. Hopefully it illustrates the difficulty of the problem.

Let us work in a one-dimensional universe (this can be further generalised), and let the last possible time the electron can hit the detector be $T$ and the earliest possible time be $t_0$. The probability that at time $t_0$ the electron will be measured at $x$ is given by $p(t_0)\,\delta t$, that at time $t_0+\delta t$ by $p(t_0+\delta t)\,\delta t$, and so on. Let $U$ be the unitary time-evolution operator.

Now let us make use of the density matrix formalism to specify the density matrix after it is measured at time $T$.

$$\rho = p(T)\,|x\rangle\langle x| + p(T-\delta t)\,U(\delta t)|x\rangle\langle x|U^\dagger(\delta t) + \dots$$

In the limit $\delta t \to 0$:

$$\rho = \int_{t_0}^{T} p(t)\,U(T-t)|x\rangle\langle x|U^\dagger(T-t)\,dt$$

Let the distance between the electron gun and the screen be $a$. Now, let's slightly shift the screen away by a displacement $\delta a$ along the x-axis. Then the new density matrix will be:

$$\rho + \delta\rho = \int_{t_0+\delta t_0}^{T+\delta T} (p(t) + \delta p(t))\,U(T-t)|x+\delta a\rangle\langle x+\delta a|U^\dagger(T-t)\,dt$$

Using the translation operator and keeping the lower order terms:

$$\rho + \delta\rho = \int_{t_0+\delta t_0}^{T+\delta T} (p(t) + \delta p(t))\,U(T-t)\left(1 - \frac{\delta a \cdot \hat p}{\hbar}\right)|x\rangle\langle x|\left(1 + \frac{\delta a \cdot \hat p}{\hbar}\right)U^\dagger(T-t)\,dt$$

The R.H.S. above can be expanded as the sum of the following terms:

$$\tilde\rho = \int_{t_0+\delta t_0}^{T+\delta T} p(t)\,U(T-t)|x\rangle\langle x|U^\dagger(T-t)\,dt$$

$$\delta\tilde A = \int_{t_0+\delta t_0}^{T+\delta T} \delta p(t)\,U(T-t)|x\rangle\langle x|U^\dagger(T-t)\,dt$$

$$\delta\tilde B = \int_{t_0+\delta t_0}^{T+\delta T} p(t)\,U(T-t)\left(\frac{\delta a \cdot \hat p}{\hbar}|x\rangle\langle x| - |x\rangle\langle x|\frac{\delta a \cdot \hat p}{\hbar}\right)U^\dagger(T-t)\,dt$$

Hence,

$$\rho + \delta\rho = \tilde\rho + \delta\tilde A + \delta\tilde B$$

Focusing on $\tilde\rho - \rho$:

$$\delta\tilde\rho = \tilde\rho - \rho = \int_{t_0+\delta t_0}^{t_0} p(t)\,U(T-t)|x\rangle\langle x|U^\dagger(T-t)\,dt + \int_{T}^{T+\delta T} p(t)\,U(T-t)|x\rangle\langle x|U^\dagger(T-t)\,dt$$

Hence,

$$\delta\rho = \delta\tilde\rho + \delta\tilde A + \delta\tilde B$$

Taking the trace:

$$\text{Tr}\,\delta\rho = \text{Tr}\left(\delta\tilde\rho + \delta\tilde A + \delta\tilde B\right) = 0$$

Additionally, I'd be willing to bet that in scenarios where momentum is conserved, $[H,\hat p]=0$, the shifts $\delta T$ and $\delta t_0$ increase linearly with $\delta a$.

Answered by More Anonymous on January 3, 2021

If we know the wave function, then we also know the time of arrival in a statistical sense. Consider a laser pulse. Suppose that the electric field is a wave packet travelling at speed $v$, say a 3D Gaussian. Let's assume the spread to be constant for simplicity. The probability of a transition in the sensor is proportional to $E^2$ by Fermi's golden rule. $E$ is known at every position at every point in time, and so is the probability of detecting a photon. The arrival time will then be a Gaussian distribution centered at $d/v$, where $d$ is the distance to the sensor.

Answered by my2cts on January 3, 2021
