[Ray Tracing] Wavefronts & Reception Sphere

AI Thread Summary
Wavefronts in ray tracing refer to the local plane waves associated with each ray; they are perpendicular to the rays themselves. The reception sphere is a conceptual tool that determines whether a ray is received: a sphere is constructed around the receiver whose radius is set by the angular separation between launched rays and the distance the ray has traveled, and a ray counts only if it passes within that sphere. Only one ray per physical path should contribute to the total received power, to avoid double counting. The double-count problem arises when the wavefronts of neighboring rays overlap, so that more than one ray can represent the same path, complicating the accurate prediction of received power. Understanding these concepts is important for applying ray tracing to propagation prediction.
whitenight541
Hi all,

I'm confused about the concept of wavefronts in ray tracing. Is each ray considered a wavefront, or what exactly is a wavefront in ray tracing?

Regarding the reception sphere, it is mentioned that only one ray should be received from an actual path. I don't get it: does this mean that if a ray is received, and then after some further tracing it is reflected and reaches the receiver again, it shouldn't contribute to the total received power a second time?

Some papers also describe the double-count problem. I don't understand what this problem is about; I think it has something to do with wavefronts (which I'm confused about).

Thanks in advance
 
I'm a little confused by your terms; I don't know what a 'reception sphere' is.

In geometrical optics the rays are normal to the wavefronts, but the wavefront itself is usually not something you work with directly in geometrical optics. Aberrations, for example, are treated differently in ray optics vs. wave optics.
 
Each ray represents a "local" plane wave. The wavefront is simply a plane wave that is normal to the ray and has an area defined by the ray tube (which expands as the ray travels, due to the divergence of the rays).
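To make the "local plane wave" picture a bit more concrete, the textbook geometrical-optics form for the field carried along a ray uses a spreading factor built from the wavefront's principal radii of curvature ##\rho_1, \rho_2## at a reference point, with ##s## the distance along the ray (this is the generic GO expression, not something taken from a specific reference in this thread):

$$\mathbf{E}(s) \approx \mathbf{E}(0)\,\sqrt{\frac{\rho_1\,\rho_2}{(\rho_1+s)(\rho_2+s)}}\,e^{-jks}$$

For a spherical wave launched from a point source, ##\rho_1=\rho_2##, and the familiar ##1/r## amplitude decay of the expanding ray tube falls out of the square root.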

I am not sure what this reception sphere is, or how you expect a ray to contribute to the total power. If I recall correctly, no ray is used directly for the observables. The rays are used to find the excited surface currents on your scatterer; you then take the currents and integrate them with the dyadic Green's function to find the scattered fields. The direct field is a separate problem, which I guess you could use a "ray" to figure out as well, but really you define the excitation at the beginning, so it is known and the direct field is a separate and easier problem.
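For reference, the integration step described above is essentially the standard radiation integral; as a sketch (with ##\bar{\bar{G}}## the dyadic Green's function, ##\mathbf{J}_s## the induced surface current, and ##S## the illuminated surface; this is the generic textbook form rather than any particular code's implementation):

$$\mathbf{E}_{\mathrm{scat}}(\mathbf{r}) \approx -j\omega\mu \int_{S} \bar{\bar{G}}(\mathbf{r},\mathbf{r}')\cdot\mathbf{J}_s(\mathbf{r}')\,\mathrm{d}S'$$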

I can't remember what double counting is; I read about it in the documentation, but I can't remember the details.
 
The reception sphere is a technique to determine which rays are actually received by a receiver. It constructs a sphere around the receiver with a radius proportional to the angular separation between launched rays and to the total unfolded distance traveled by the ray. If the ray passes within the sphere, it is received and it contributes to the total field at that receiver.
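For concreteness, here is a minimal sketch of that test in Python. The radius choice ##R = \alpha d/\sqrt{3}## (##\alpha## the angular separation between adjacent launched rays, ##d## the total unfolded path length) is one common choice from the ray-launching literature, and the function and argument names below are my own assumptions rather than anything from a specific paper:

Code:
import numpy as np

def is_received(ray_origin, ray_dir, rx_pos, alpha, unfolded_dist):
    """Reception-sphere test for the final segment of a traced ray.

    ray_origin:    point where the final segment starts (after its last bounce)
    ray_dir:       direction of the final segment
    rx_pos:        receiver position
    alpha:         angular separation between adjacent launched rays (radians)
    unfolded_dist: total path length from the transmitter, unfolded along the ray
    """
    ray_dir = np.asarray(ray_dir, float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    to_rx = np.asarray(rx_pos, float) - np.asarray(ray_origin, float)
    # Perpendicular distance from the receiver to the ray's line of travel.
    perp = np.linalg.norm(to_rx - np.dot(to_rx, ray_dir) * ray_dir)
    # Reception-sphere radius grows with the angular spacing and the distance
    # travelled, so the sphere roughly covers one ray's share of the wavefront.
    radius = alpha * unfolded_dist / np.sqrt(3.0)
    return perp <= radius, perp

The ray is kept (and its field added at the receiver) when the perpendicular distance is below the radius; the distance itself is returned so that ties between rays representing the same path can be broken later.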

I think I now understand the double-count problem:

Apart from ray tracing, we can picture the waves emitted from the source as spherical waves growing as they move away from the source, so the wavefront is spherical. If we divide the spherical wavefront (at distance r) into hexagonal cells, I think each of these hexagons would represent the wavefront of one ray. Each ray then has a well-defined wavefront that does not overlap with those of the neighboring rays.

If we return to the reception sphere concept, we construct the sphere about the receiver and say that the ray is received if it lies within that sphere. We can reverse things a little and say that the ray is received if the receiver lies within the wavefront of the ray. The wavefront footprint is hexagonal while the reception sphere is obviously spherical, so the hexagonal shape is approximated by a sphere, and that causes the double-count problem (since parts of the neighboring wavefronts now overlap).

Does this make any sense? :D
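If that picture is right, the practical fix usually described in ray-launching papers is to keep, among received rays that describe the same physical path (same ordered sequence of interactions), only the one that passes closest to the receiver. A minimal sketch in the same spirit as the reception-sphere snippet above, where the dictionary layout is my own assumption:

Code:
def prune_double_counts(received_rays):
    """Keep only the closest ray for each physical path.

    Each entry of `received_rays` is assumed to look like
    {"path_id": ("tx", "wall3", "wall7"), "perp": 0.4, "field": 1 + 2j},
    where "path_id" encodes the ordered sequence of interactions (rays hitting
    the same surfaces in the same order describe the same physical path) and
    "perp" is the perpendicular distance from the reception-sphere test.
    """
    best = {}
    for ray in received_rays:
        key = ray["path_id"]
        if key not in best or ray["perp"] < best[key]["perp"]:
            best[key] = ray
    return list(best.values())

Only the surviving rays' fields are then summed at the receiver, so each physical path contributes exactly once even when neighbouring reception spheres overlap.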
 