Image Reconstruction: Phase vs. Magnitude

In summary, the magnitude and phase spectra together contain the same information as the original complex image. Reconstructing from the magnitude alone (figure 1.(c)) sets every phase to zero, so the spatial harmonics add coherently only at the origin and produce a bright maximum there, with little recognisable structure elsewhere. Reconstructing from the phase alone (figure 1.(d)) sets every magnitude to one, which acts as a high-pass filter: overall contrast is lost, but edges and lines survive because their relative phases are preserved. Those edges are perceptually striking because our vision system is constantly searching for edges and outlines. Only the combination of both spectra gives a complete representation of the original image.
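As a concrete illustration, here is a minimal NumPy sketch of the two reconstructions discussed in the thread; the square test image is a placeholder assumption, since the thread's actual test image is not reproduced here.

```python
import numpy as np

# Stand-in test image (the thread's xx.PNG is not reproduced here):
# a bright square on a dark background.
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0

F = np.fft.fft2(img)                  # complex spectrum F(u, v)
mag, phase = np.abs(F), np.angle(F)

# Figure 1.(c): magnitude only, every phase set to zero.
recon_mag = np.real(np.fft.ifft2(mag))

# Figure 1.(d): phase only, every magnitude set to one.
recon_phase = np.real(np.fft.ifft2(np.exp(1j * phase)))
```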
  • #1
ramdas
Figure 1.(c) shows the test image reconstructed from the MAGNITUDE spectrum only. We can say that the intensity values of LOW frequency pixels are comparatively higher than those of HIGH frequency pixels.

Figure 1.(d) shows the test image reconstructed from the PHASE spectrum only. We can say that the intensity values of HIGH frequency pixels (edges, lines) are comparatively higher than those of LOW frequency pixels.

Why is this seemingly magical contradiction (or exchange) of intensity present between the test image reconstructed from the magnitude spectrum only and the one reconstructed from the phase spectrum only, when the two combined give back the original test image?
 

Attachments

  • xx.PNG (Figure 1), 71 KB
  • #2
The magnitude and the phase contain different information from each other, and together they contain the same information as the original complex image. So if the original image has information A, B, C, and D, and the magnitude spectrum has A and D, then B and C must be in the phase spectrum.
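A quick numerical check of this point (a sketch using an arbitrary random array as the image): recombining |F(u,v)| with e^(j∠F(u,v)) reproduces F(u,v) exactly, so inverting the result returns the original image to floating-point precision.

```python
import numpy as np

img = np.random.rand(64, 64)                       # any test image
F = np.fft.fft2(img)
recombined = np.abs(F) * np.exp(1j * np.angle(F))  # |F| * e^{j*phase} == F
recovered = np.real(np.fft.ifft2(recombined))

assert np.allclose(recovered, img)                 # nothing was lost
```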
 
  • #3
@Mentor Sir, can you tell me in some detail what is actually happening in figures 1.(c) and 1.(d)?
 
  • #4
To understand what is happening in 1c, and to account for the lack of any apparent image: the result is the sum of a whole set of spatial harmonics that are only 'in phase' at the point (0,0), where they produce a massive maximum (their phases have all been set to zero).
In 1d, you are starting with a whole set of spatial harmonics of equal amplitude, producing a more or less uniform brightness over the image. But at certain places in the original scene (the edges), the relative phases combine to produce a sum with big discontinuities. Even when the amplitudes of the harmonics are all made equal, this still gives identifiable, abrupt changes in the resultant amplitude in the same places as the edges in the original.

It's interesting to note that the signal analysis done in the eye is very sensitive to phase distortion (the phase is what we are looking at, because that tells you where the edges are), whereas the ear is much more sensitive to amplitude-versus-frequency distortion: you can hear speech, for instance, even when the audio signal has been subjected to all sorts of phase distortion in audio compression systems.
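Both effects are easy to see in one dimension. The sketch below uses an arbitrary step signal (my choice, not from the thread) as a stand-in for one row of the image.

```python
import numpy as np

N = 256
x = np.zeros(N)
x[60:160] = 1.0                      # a 1-D "scene" with two edges

X = np.fft.fft(x)

# The 1c case: keep amplitudes, zero every phase. All harmonics now
# peak together at n = 0, so the sum piles up into one big maximum there.
zero_phase = np.real(np.fft.ifft(np.abs(X)))
print(np.argmax(zero_phase))         # 0: everything adds up at the origin

# The 1d case: keep phases, set every amplitude to 1. The result is
# roughly uniform, except near n = 60 and n = 160, where the preserved
# phases still line up to produce abrupt excursions at the edges.
unit_amp = np.real(np.fft.ifft(np.exp(1j * np.angle(X))))
```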
 
  • #5
Image d can be understood as a filter. Since the phase is preserved and the magnitude is set to 1, this image is the same as the original image passed through a filter whose response is inversely proportional to the k-space magnitude. Since the k-space magnitude is high in the centre and low at the edges of k-space, this amounts to a high-pass filter. Visually, you see the high-pass filtering in the preservation of edges and the loss of contrast.

I don't know a simple way to understand image c. EDIT: I just noticed sophiecentaur's approach for understanding image c, which seems good to me.
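The filter reading of image d can be checked directly in code. A sketch, using a random stand-in image (which also ensures |F| has no exact zeros): e^(j∠F) is identically F/|F|, i.e. the original spectrum multiplied by H(u,v) = 1/|F(u,v)|.

```python
import numpy as np

img = np.random.rand(64, 64)         # stand-in for the test image
F = np.fft.fft2(img)

# The phase-only image is the original filtered by H = 1/|F|, which is
# small at low frequencies (where |F| is large) and large at high
# frequencies: a high-pass filter.
H = 1.0 / np.abs(F)
phase_only = np.fft.ifft2(F * H)

assert np.allclose(phase_only, np.fft.ifft2(np.exp(1j * np.angle(F))))
```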
 
  • #6
Whoops. Where did that post go? I was all ready to have a go at answering it.

I am not sure what was meant by low and high frequency pixels. A pixel is the same width over the whole picture. I think you could use the term low and high frequency spatial variation instead.
 
  • #7
Question edited. I have added equations to the previous post...

Figure 1.(c) shows the test image reconstructed from the MAGNITUDE spectrum only. We can say that the intensity values of LOW frequency pixels are comparatively higher than those of HIGH frequency pixels. Here f(x,y) is the image function and F(u,v) is its 2-D Fourier transform:


f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} |F(u,v)| \, e^{j 2\pi (ux/M + vy/N)} --(1)

Figure 1.(d) shows the test image reconstructed from the PHASE spectrum only. We can say that the intensity values of HIGH frequency pixels (edges, lines) are comparatively higher than those of LOW frequency pixels.

f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} e^{j \angle F(u,v)} \, e^{j 2\pi (ux/M + vy/N)} --(2)

What I want to ask is: in the phase-only reconstruction, why do I get only edges and lines, and not the low frequency components? From equation (2) alone I get no indication that only edge-like features should be emphasised...
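Equations (1) and (2) can be evaluated literally as double sums and compared against the FFT route; in this form it is clear that (2) simply replaces |F(u,v)| by 1, i.e. divides the spectrum by its own magnitude, which is the high-pass filtering described in post #5. A sketch on a tiny random image (the 1/(MN) normalisation follows NumPy's inverse-transform convention):

```python
import numpy as np

M = N = 8
img = np.random.rand(M, N)
F = np.fft.fft2(img)

# Four-index basis: basis[x, y, u, v] = e^{j 2 pi (ux/M + vy/N)}
x = np.arange(M).reshape(-1, 1, 1, 1)
y = np.arange(N).reshape(1, -1, 1, 1)
u = np.arange(M).reshape(1, 1, -1, 1)
v = np.arange(N).reshape(1, 1, 1, -1)
basis = np.exp(2j * np.pi * (u * x / M + v * y / N))

eq1 = (np.abs(F) * basis).sum(axis=(2, 3)) / (M * N)                 # eq. (1)
eq2 = (np.exp(1j * np.angle(F)) * basis).sum(axis=(2, 3)) / (M * N)  # eq. (2)

assert np.allclose(eq1, np.fft.ifft2(np.abs(F)))
assert np.allclose(eq2, np.fft.ifft2(np.exp(1j * np.angle(F))))
```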
 
  • #8

sophiecentaur said:
Whoops. Where did that post go? I was all ready to have a go at answering it.

I am not sure what was meant by low and high frequency pixels. A pixel is the same width over the whole picture. I think you could use the term low and high frequency spatial variation instead.

Sir, I have added the post again. It was deleted yesterday by mistake...
 
  • #9
To account for the visibility of edges, you need to realise that, in the frequency domain, the high and low frequency components are not only there at the edges; they are everywhere (that's what Fourier is all about). The reason that you 'see' a step or impulse is that the components happen to add up at those places to produce a visible (but very slight) change in brightness. Over most of the picture, the frequency components add up to an 'average' brightness with no visible change, hence the mid-grey appearance.
You need to take into account our subjective appreciation of a scene as well as the Maths involved.
IMO, the reason that it works for our eyes is that our vision system is constantly searching for edges and outlines. Out in the wild, it's the best way to recognise food and threats. It's the outline of an elephant against a grey wall that allows us to spot it, not the slight change in greyness.
Likewise, we are good at spotting small amounts of rapid movement, but we can ignore the variation in light level as a cloud passes over the sun, etc.

Having written all this, I still have to agree with you that the results you showed are not what you'd expect intuitively. When I was shown the effect, years ago, I was just as confused as you have been!
 

FAQ: Image Reconstruction: Phase vs. Magnitude

What is the difference between phase and magnitude in image reconstruction?

The phase spectrum encodes where each spatial frequency component sits within the image, which is the positional information that determines structures such as edges, while the magnitude spectrum encodes how strongly each spatial frequency is present. In image reconstruction, these two components are often manipulated separately to enhance or modify the image.

Which is more important in image reconstruction - phase or magnitude?

Both phase and magnitude are equally important in image reconstruction. The phase provides important spatial information and contributes to the overall structure of the image, while the magnitude determines the contrast and overall brightness. Both components must be considered and balanced in order to achieve a high-quality image reconstruction.

How does phase and magnitude affect image resolution?

Phase errors shift, blur, or distort image features, since the phase encodes their positions; magnitude errors change the contrast and sharpness, which also affects the perceived resolution. Therefore, both phase and magnitude must be accurately reconstructed in order to achieve the best possible resolution.

Can phase and magnitude be manipulated independently in image reconstruction?

Yes, phase and magnitude can be manipulated independently in image reconstruction. This allows for the enhancement or modification of specific features in the image, such as sharpening edges or adjusting contrast. However, it is important to maintain a balance between the two components to avoid introducing artifacts or distortion in the final image.
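A classic illustration of this independence (sketched here with two placeholder images; any equal-sized pair works) is to combine the magnitude spectrum of one image with the phase spectrum of another: the hybrid usually resembles the image that donated the phase, because the phase carries the positional structure.

```python
import numpy as np

a = np.zeros((128, 128)); a[20:60, 20:60] = 1.0   # placeholder image A
b = np.zeros((128, 128)); b[64:, :] = 1.0         # placeholder image B

FA, FB = np.fft.fft2(a), np.fft.fft2(b)

# Magnitude from A, phase from B: the result inherits B's layout (edges).
hybrid = np.real(np.fft.ifft2(np.abs(FA) * np.exp(1j * np.angle(FB))))
```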

What techniques are commonly used for phase and magnitude reconstruction in imaging?

There are several techniques used for phase and magnitude reconstruction in imaging, including Fourier transform, phase retrieval, and maximum-likelihood estimation. Each technique has its own advantages and limitations, and the choice of technique depends on the specific application and the quality of the acquired data.
