amjad-sh
In a single-slit diffraction experiment, monochromatic light of wavelength ##\lambda## is passed through a slit of finite width ##D## and a diffraction pattern is observed on a screen.
For a screen located very far from the slit, the intensity of light ##I## observed on the screen, in units of its maximum value ##I_0## and as a function of ##\sin(\theta)##, where ##\theta## is the angle between the normal to the slit and the line from the slit to a position on the screen, is shown in figure 1 attached below.
If we let ##D=4 \times 10^{-9}\,m## and ##\lambda= 1 \times 10^{-6}\,m##, ##I(\theta)## will have the form shown in figure 2.
If we greatly increase the slit width to ##D=4 \times 10^{-1}\,m## and keep the wavelength ##\lambda## the same, ##I(\theta)## will have the form shown in figure 3.
You can see that when ##D## is very small (##D= 4 \times 10^{-9}\,m##), the intensity on the screen is essentially at its maximum along the whole screen, but when we increase the width to ##D = 4 \times 10^{-1}\,m##, the intensity is maximal only at the center of the screen and practically null everywhere else.
What is confusing me is this: if we suppose that in both cases the same amount of light enters the slit, shouldn't the total energy reaching the screen be the same in the two cases? Comparing the intensities, it looks as if the total energy detected on the screen for ##D=4 \times 10^{-9}\,m## would be higher than for ##D=4 \times 10^{-1}\,m##. Doesn't this violate the law of conservation of energy? Since the same amount of light emanates from the slit, I would expect to detect the same total energy (the same integrated intensity) in both cases.
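To make the comparison explicit (this is just how I am quantifying "total energy on the screen", treating the far-field screen as covering ##-1 \le \sin\theta \le 1##), I mean comparing

$$E_{\text{screen}} \;\propto\; \int_{-1}^{1} I(\theta)\, \mathrm{d}(\sin\theta),$$

evaluated once with ##D=4 \times 10^{-9}\,m## and once with ##D=4 \times 10^{-1}\,m##.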
Note that ##I(\theta)=I_0\Big(\dfrac{\sin(\frac{\beta}{2})}{\frac{\beta}{2}}\Big)^2##, where ##\beta=\dfrac{2\pi D\sin(\theta)}{\lambda}##.
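For concreteness, here is a small Python/NumPy sketch (the function name and the sample values of ##\sin\theta## are just illustrative choices of mine, not taken from the figures) that evaluates this formula for the two slit widths and reproduces the behaviour of figures 2 and 3:

```python
import numpy as np

def normalized_intensity(sin_theta, D, wavelength):
    """I/I0 = (sin(beta/2)/(beta/2))^2 with beta = 2*pi*D*sin(theta)/lambda."""
    half_beta = np.pi * D * np.asarray(sin_theta, dtype=float) / wavelength
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(half_beta/pi) = sin(half_beta)/half_beta
    return np.sinc(half_beta / np.pi) ** 2

wavelength = 1e-6  # lambda = 1e-6 m

# Figure 2 regime: D = 4e-9 m << lambda, so beta/2 stays tiny and I/I0 stays near 1
print(normalized_intensity([0.0, 0.5, 1.0], D=4e-9, wavelength=wavelength))
# -> values all very close to 1

# Figure 3 regime: D = 4e-1 m >> lambda, so the first zero sits at sin(theta) = lambda/D
print("first zero at sin(theta) =", wavelength / 4e-1)  # 2.5e-6
print(normalized_intensity([0.0, 2.5e-6, 0.5], D=4e-1, wavelength=wavelength))
# -> 1 at the center, essentially 0 away from it
```

Note that this evaluates ##I/I_0##, i.e. each pattern in units of its own maximum, which is exactly what the figures show.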