Adgorn
Hi everyone,
this is sort of a soft question I need to ask to make sure my understanding is correct; it relates to a little project I'm doing on measurement resolution. The first question is to clear up a general concept; the second builds on the first and is the actual question.
First, when light is directed onto a detector, what is seen is a "patch" with a certain shape and a fixed intensity. However, since light is an electromagnetic wave, the magnitude of the field oscillates between 0 and some maximum at an extremely fast rate (hundreds of terahertz for visible light). So if we had a theoretical measuring device capable of sampling the light hitting it at, say, a quadrillion hertz, and we directed visible light at it, would the detector show a "patch" with oscillating intensity? If so, is the fixed intensity we normally see just our puny mortal eyes averaging the intensity because they lack the temporal resolution?
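To spell out the averaging I have in mind (assuming a simple monochromatic wave at a single point on the detector):
$$E(t) = E_0 \cos(2\pi f t), \qquad I(t) \propto E(t)^2 = E_0^2 \cos^2(2\pi f t),$$
$$\langle I \rangle \propto \frac{E_0^2}{2},$$
since ##\cos^2## averages to ##\frac 1 2## over a full cycle. So a detector much slower than ##f## would report only the constant average, while the hypothetical ultrafast detector would see ##I(t)## oscillating at ##2f##.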
Now, assuming the answer to the above is more or less yes, here is my real question, regarding the Rayleigh criterion. The criterion (in optics) says that if two light sources are too close, or more precisely, if their two projections overlap too much, it is impossible to tell whether the combined projection comes from a single light source or from two close sources. My question is whether that would still be the case if we knew the phase of the two light sources at all times.
For example, say we project two light sources with the same frequency ##f## and amplitude through a slit, so that each source creates a nice interference pattern, but since the sources are closer than the Rayleigh criterion allows, the peaks of the two patterns merge into what looks like a single peak. Now suppose we have our super-accurate measuring device, and for convenience let's also say the phase difference between the two sources is exactly ##\frac \pi 2##. If the assumption from the first question holds, then when the first signal is at its peak intensity, the second is at its minimum (perhaps 0), so the peak of the first signal is visible and much more prominent than the second. ##\frac 1 {4f}## seconds later, the opposite happens: the second peak is visible and the first is not. Clearly, in this situation one could tell whether the projection comes from two sources or just one, depending on whether the main peak changes location every ##\frac 1 {4f}## seconds. This should also work when the phase difference is pretty much anything other than 0, though perhaps to a lesser extent.
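Here is a minimal numerical sketch of the scenario I'm describing. It's an idealization: I assume each source produces a sinc² single-slit envelope on the detector, place them closer than the Rayleigh limit, and look at the *instantaneous* field-squared pattern (no time averaging), with the separation and grid values picked arbitrarily for illustration:

```python
import numpy as np

# Detector coordinate and two sinc field envelopes separated by less than
# the width of the central peak (i.e. below the Rayleigh limit).
x = np.linspace(-5, 5, 2001)     # detector position (arbitrary units)
sep = 0.5                        # source separation, well under the peak width
amp1 = np.sinc(x - sep / 2)      # field envelope of source 1 (np.sinc = sin(pi x)/(pi x))
amp2 = np.sinc(x + sep / 2)      # field envelope of source 2

def instantaneous_intensity(phase):
    """Square of the total field at the instant the oscillation phase is `phase`.
    Source 2 leads source 1 by pi/2, as in the example above."""
    field = amp1 * np.cos(phase) + amp2 * np.cos(phase + np.pi / 2)
    return field ** 2

# At t = 0 only source 1 contributes; a quarter period later only source 2 does.
peak_t0 = x[np.argmax(instantaneous_intensity(0.0))]        # phase = 0
peak_t1 = x[np.argmax(instantaneous_intensity(np.pi / 2))]  # phase at t = 1/(4f)
print(peak_t0, peak_t1)  # the peak jumps from one source position to the other
```

The peak of the instantaneous pattern alternates between the two source positions every quarter period, which is exactly the signature I'm asking about; a time-averaging detector would instead see the two envelopes blurred into one.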
So, when the phases of two light sources are known and the difference between them is not 0, is it possible to overcome the Rayleigh criterion?