Why is astronomy intensity interferometry immune to atmospheric turbulence?

Physics Asked on January 22, 2021

I have read there is renewed interest in intensity interferometry in astronomy, and that intensity interferometry is immune to atmospheric turbulence, which plagues astronomy and regular (amplitude) interferometry. I don’t know why this is. I read that intensity interferometry is based on the intensity of the incoming wavefront (as measured at two or more telescopes), not on amplitude and phase as in regular interferometry. I don’t see why atmospheric turbulence does NOT cause random intensity fluctuations, thereby plaguing intensity interferometry just like it plagues the rest of astronomy.

2 Answers

Amplitude interferometry attempts to bring together the signals from multiple telescopes and combine them to form an interference pattern. Because of limited source coherence and the current impossibility of recording the phase and amplitude of high-frequency optical light, high-precision (sub-wavelength), rapidly moving delay lines are needed to combine the signals in real time and compensate for the changing geometric delay between them. The phase relationships are badly affected by the atmosphere unless individual "observations" are kept shorter than the tens-of-milliseconds timescale of turbulence-induced phase variations.

The constraints on intensity interferometry are less stringent. The arrival times of photons can be recorded with nanosecond precision, which is far coarser than the femtosecond-scale shifts in arrival time introduced by the atmosphere (see below), and the data can then be correlated offline. The signal being investigated does not depend on the phase difference between the detectors and will be found so long as there is a degree of coherence between the two signals (Twiss 2010). No phase information is needed, so the delay between the telescopes just needs to be known to better than the distance light travels in the time-resolution interval, i.e. a few cm. The technique is therefore immune to the phase variations caused by "seeing".
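To make the offline correlation concrete, here is a minimal sketch (not any observatory's actual pipeline) that bins two recorded streams of photon arrival times and estimates the normalized intensity correlation as a function of lag. The function name, file names and nanosecond bin width are illustrative assumptions.

```python
import numpy as np

def g2_vs_lag(t1, t2, bin_width=1e-9, max_lag_bins=50):
    """Estimate the normalized intensity correlation g2(tau) from two
    arrays of photon arrival times (in seconds), binned at bin_width."""
    t_max = min(t1.max(), t2.max())
    edges = np.arange(0.0, t_max, bin_width)
    n1, _ = np.histogram(t1, bins=edges)   # counts per time bin, telescope 1
    n2, _ = np.histogram(t2, bins=edges)   # counts per time bin, telescope 2
    lags = np.arange(-max_lag_bins, max_lag_bins + 1)
    g2 = []
    for k in lags:
        # shift one count stream by k bins and correlate the overlapping part
        if k >= 0:
            a, b = n1[k:], n2[:len(n2) - k]
        else:
            a, b = n1[:len(n1) + k], n2[-k:]
        g2.append(np.mean(a * b) / (np.mean(a) * np.mean(b)))
    return lags * bin_width, np.array(g2)

# Hypothetical usage with arrival times recorded at two telescopes:
# t1 = np.load("telescope_A_times.npy")
# t2 = np.load("telescope_B_times.npy")
# lag, g2 = g2_vs_lag(t1, t2)
```

Because everything happens in software after the fact, there are no delay lines to stabilize; the atmosphere would have to shift arrival times by a sizeable fraction of the nanosecond bin width to matter, and (as explained below) it shifts them by only about a femtosecond.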

To be more concrete: gross atmospheric turbulence on timescales of a few $\times 10$ ms can change the path difference between the detectors by a significant fraction of a wavelength, thus destroying the interference fringes. The same path-difference variations change the arrival times of photons by just $\sim \lambda/c \approx 10^{-15}$ s.
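As a quick sanity check (taking $\lambda \approx 500$ nm as a representative optical wavelength):
$$\frac{\lambda}{c} \approx \frac{5\times 10^{-7}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m\,s^{-1}}} \approx 1.7\times 10^{-15}\,\mathrm{s},$$
about six orders of magnitude smaller than the nanosecond timing precision, so these atmospheric arrival-time shifts are invisible to the correlator.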

The actual lags that are currently searched for in intensity interferometry are at the scale of a few ns. For example, for an interferometer with a baseline of 100 m, two sources separated by 10 microarcsec introduce a delay of 5 ns between the signals received at the two receivers. This is many orders of magnitude larger than the arrival-time fluctuations caused by atmospheric turbulence.

Answered by ProfRob on January 22, 2021

Seeing delays the wavefronts by only a few micrometers.[1] This is quite severe for direct imaging, because it acts like a phase mask that distorts the image: the optical path length from the source to every part of the mirror would need to be stable to significantly better than the optical wavelength.
The same is true for amplitude interferometry, except that it is the individual telescopes, rather than the parts of a single mirror, that must maintain a fixed relative optical path length.
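To attach an illustrative number (assuming a seeing-induced path error of $\Delta L \approx 3\,\mu\mathrm{m}$ and $\lambda \approx 0.5\,\mu\mathrm{m}$):
$$\Delta\varphi = \frac{2\pi\,\Delta L}{\lambda} \approx \frac{2\pi \times 3\,\mu\mathrm{m}}{0.5\,\mu\mathrm{m}} \approx 38\ \mathrm{rad},$$
i.e. several full fringe cycles, which scrambles any interference pattern formed by combining the amplitudes directly.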

In intensity interferometry you measure the intensity correlation, also known as $g^{(2)}$. As in the original paper by Hanbury Brown and Twiss, one would measure the spatial intensity correlation function $g^{(2)}(\Delta \vec{r})$. Via the Siegert relation one can extract $g^{(1)}(\Delta \vec{r})$ from it, which via the van Cittert-Zernike theorem lets you calculate back the intensity distribution at the source.
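For reference, in its idealized form (fully polarized chaotic light and point-like detectors, an assumption added here) the Siegert relation reads
$$g^{(2)}(\Delta\vec{r}) = 1 + \left|g^{(1)}(\Delta\vec{r})\right|^{2},$$
so the measured excess correlation gives the modulus of the first-order coherence, and the van Cittert-Zernike theorem identifies $g^{(1)}(\Delta\vec{r})$ with the normalized Fourier transform of the source brightness distribution.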
It is crucial that the intensity fluctuations at the detectors are compared within their coherence time $\tau_c$, otherwise they appear random. The coherence time follows, via the Wiener-Khinchin theorem, from the spectral bandwidth $\Delta\nu$ of the detected light; as a rule of thumb, $\tau_c \approx 1/\Delta\nu$. Stars usually emit over a large bandwidth of several hundred terahertz, which leads to a coherence time on the order of femtoseconds. In that case the few micrometers mentioned at the beginning would have a noticeable effect, because they are of the same order of magnitude as the distance light travels within a few femtoseconds.
But here comes the trick: since state-of-the-art detectors are in any case not fast enough to resolve intensity fluctuations on femtosecond timescales, it is necessary to filter the bandwidth of the light down to a few gigahertz (a coherence time of a few hundred picoseconds). Within the coherence time of the filtered light, light travels $\approx 10\,\text{cm}$, much more than the random delay caused by seeing.
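A quick numerical check of the two regimes, using the illustrative bandwidths quoted in this answer (a rough sketch, not a rigorous coherence calculation):

```python
C = 3.0e8  # speed of light, m/s

def coherence(bandwidth_hz):
    """Rule-of-thumb coherence time (tau_c ~ 1/bandwidth) and the
    corresponding coherence length for a given optical bandwidth."""
    tau_c = 1.0 / bandwidth_hz   # coherence time, s
    return tau_c, C * tau_c      # coherence length, m

for label, bw in [("unfiltered starlight, ~300 THz", 3e14),
                  ("filtered light, ~3 GHz", 3e9)]:
    tau_c, length = coherence(bw)
    print(f"{label}: tau_c ~ {tau_c:.1e} s, c*tau_c ~ {length:.1e} m")

# Output: ~3 fs and ~1 micrometer for unfiltered light, so a few-micrometer
# seeing delay matters; ~0.3 ns and ~10 cm after filtering, so it does not.
```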
Of course, the filtering has the disadvantage that a large fraction of the light is discarded, so the shot noise is higher.

[1] Unfortunately, I didn't find this on the English Wikipedia article, but the German one says it.

Answered by A. P. on January 22, 2021
