Prism Camera Lens: Expanding Visible Light Spectrum?

In summary: You need to be able to split the spectrum up and analyse it so you can extract more information than a camera can capture.
  • #1
LightningInAJar
Is it possible to create a lens that expands a narrow range within the visible light spectrum to represent it using full visible spectrum?
 
  • #2
Working on the assumption that anything is possible:
The device would need to be part of a raster scan device that forms an image. There would need to be some form of energy gain, since photon energy changes with frequency.

Each point in an image would be processed and converted as it was scanned. Non-linear optics could be used to down-convert the observed light into the IR and then double the frequency back into the visible band with an optical squarer. The result would be to double the spectral width.
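A rough back-of-the-envelope sketch of that down-shift-then-double idea in Python; the band edges and the down-conversion offset are assumed numbers chosen only to illustrate the arithmetic, not a real device design:

```python
# Illustrative arithmetic only: shift an assumed narrow visible band down into the IR,
# then frequency-double it back into the visible range.
VISIBLE = (400e12, 790e12)            # approximate visible range, Hz

band = (540e12, 560e12)               # assumed narrow green band, 20 THz wide
offset = 270e12                       # assumed down-conversion offset, Hz

shifted = (band[0] - offset, band[1] - offset)   # now in the IR: 270-290 THz
doubled = (2 * shifted[0], 2 * shifted[1])       # doubled back up: 540-580 THz

print(f"original width: {(band[1] - band[0]) / 1e12:.0f} THz")
print(f"doubled width:  {(doubled[1] - doubled[0]) / 1e12:.0f} THz")
print("lands in the visible:", VISIBLE[0] <= doubled[0] and doubled[1] <= VISIBLE[1])
```

Doubling the frequency doubles the width of the band, so a 20 THz slice becomes a 40 THz slice, and the choice of offset decides where in the visible range the stretched band ends up.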
 
  • #3
I was trying to consider a way a normal camera could photograph the same scene with multiple photos and composite them to create an image with a much larger color palette. I assume file formats have fewer limits than the physical capture.
 
  • #4
What is a "normal camera"?

You could use an RGB digital camera, then process the numbers to re-map or expand the range of values.
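For the "process the numbers" route, here is a minimal sketch with NumPy and Pillow; the filename and the chosen value range are assumptions. It linearly stretches one narrow slice of the green channel so that it occupies the full 0-255 output range:

```python
import numpy as np
from PIL import Image  # pip install pillow

def stretch_range(channel, lo, hi):
    """Linearly map pixel values in [lo, hi] to [0, 255], clipping everything outside."""
    scaled = (channel.astype(np.float32) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

img = np.asarray(Image.open("photo.jpg"))            # assumed 8-bit RGB image
out = img.copy()
out[..., 1] = stretch_range(img[..., 1], 80, 120)     # expand a narrow slice of the green channel
Image.fromarray(out).save("stretched.png")
```

This only redistributes values the sensor already recorded; it cannot recover spectral detail the three broad channels never captured, which is the limitation discussed later in the thread.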
 
  • #5
LightningInAJar said:
Is it possible to create a lens that expands a narrow range within the visible light spectrum to represent it using full visible spectrum?

You need more than a "lens" for this. With a simple RGB (three-colour) analysis it is possible to distort the colour space so that a colour in the original is moved elsewhere to another colour - up to a point. Photoshop and other apps can achieve this sort of effect.

What would be your intended application? Astrophotography often uses many different filters, with a separate image for each filter taken on a wide-band monochrome sensor, and the images are all stacked together for a final composite image. Sometimes the characteristic spectral lines of elements are 'inserted' as arbitrary colours in an astro image to bring out features in, say, a nebula which may be very faint, and show it as a false colour.
Perhaps, by saying "lens" you really mean 'filter'? In which case it's a well established technique.
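As a concrete sketch of that narrowband stacking and false-colour mapping, here is one way it might look with NumPy and Pillow; the filenames, the simple linear stretch, and the SII/Ha/OIII-to-RGB assignment are all assumptions (loosely following the 'Hubble palette' convention mentioned in a link further down the thread):

```python
import numpy as np
from PIL import Image

def normalise(a):
    """Simple linear stretch of a monochrome frame to the 0-1 range."""
    a = a.astype(np.float32)
    return (a - a.min()) / (a.max() - a.min() + 1e-9)

# Three narrowband exposures taken through separate filters on a monochrome sensor.
sii = normalise(np.asarray(Image.open("SII.tif").convert("F")))
ha  = normalise(np.asarray(Image.open("Ha.tif").convert("F")))
o3  = normalise(np.asarray(Image.open("OIII.tif").convert("F")))

# Assign each filter an arbitrary display colour (false colour): SII -> R, Ha -> G, OIII -> B.
rgb = np.stack([sii, ha, o3], axis=-1)
Image.fromarray((rgb * 255).astype(np.uint8)).save("false_colour.png")
```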
 
  • #6
LightningInAJar said:
Is it possible to create a lens that expands a narrow range within the visible light spectrum to represent it using full visible spectrum?
Is there any reason you would not use software to do this?
 
  • #7
DaveC426913 said:
Is there any reason you would not use software to do this?
The reason is often Signal to Noise ratio. Distorting the colour space from just three broadband sensors can only take you so far. Eventually you need well-defined narrow-band filters, particularly for quantitative results.
 
  • #8
sophiecentaur said:
The reason is often Signal to Noise ratio. Distorting the colour space from just three broadband sensors can only take you so far. Eventually you need well-defined narrow-band filters, particularly for quantitative results.
Yep, that might be a good reason. If the OP cares to elaborate on his project, we might even find out.
 
  • #9
I didn't mean a filter, as I wasn't trying to block any color information out. I basically want to represent segments of the visible spectrum separately in each photo so as to collect more information than perhaps any given camera can capture, but in order for it to capture anything it would need to see the fullest range from the normal spectrum. Basically I want to stretch out the subtleties that a camera might not capture otherwise by using something in front of the camera that hypothetically can stretch out the spectrum like a prism. Afterwards I could maybe use a file format that allows for more color, if it exists, or maybe try making a video that flips through the images very quickly to show the highest color definition possible. This is more of an art project idea, but I am always curious about ways to get more information with my camera.
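A hedged sketch of the "several photos composited into more color information" part: average a few 8-bit exposures of the same scene in floating point and save the result at 16 bits per channel. The filenames are assumptions, and a tripod and identical framing are taken for granted; this improves tonal precision and noise, not spectral coverage.

```python
import numpy as np
from PIL import Image
import tifffile  # pip install tifffile

frames = ["shot_01.png", "shot_02.png", "shot_03.png", "shot_04.png"]  # assumed filenames

# Average in float so the result keeps sub-level detail that the 8-bit originals quantised away.
stack = np.mean([np.asarray(Image.open(f), dtype=np.float32) for f in frames], axis=0)

# Rescale the 0-255 float average to the 16-bit range and save as a 16-bit-per-channel TIFF.
tifffile.imwrite("composite_16bit.tif", (stack / 255.0 * 65535.0).astype(np.uint16))
```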
 
  • #10
Cameras are like us, they only see in three colours, and those colour bands are slightly different to the spectral bands that humans see.

You must therefore analyse the colour spectrum of each pixel independently, with something like a prism or grating, before it is recorded.

That will require a raster scan and analysis of the pixels, with a recording of each spectrum observed.
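What that per-pixel recording might look like as data, as a minimal sketch (the image size and the number of spectral bands are arbitrary assumptions): every pixel stores a whole spectrum instead of three numbers, giving a so-called hyperspectral cube.

```python
import numpy as np

height, width, n_bands = 480, 640, 31        # assumed: 31 bands spanning 400-700 nm
wavelengths = np.linspace(400, 700, n_bands)

# One recorded spectrum per pixel: a (rows, cols, bands) cube rather than (rows, cols, 3).
cube = np.zeros((height, width, n_bands), dtype=np.float32)

# A scanning instrument would fill this in point by point; here we just fake one pixel,
# a narrow emission feature centred near 550 nm.
cube[240, 320, :] = np.exp(-0.5 * ((wavelengths - 550) / 10) ** 2)

pixel_spectrum = cube[240, 320]              # the full spectrum observed at that point
print(wavelengths[pixel_spectrum.argmax()])  # prints 550.0
```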
 
  • #11
LightningInAJar said:
I didn't mean filter as I wasn't trying to block any color information out. I basically want to represent segments of the visible spectrum
Actually, if you want to analyse virtually anything you need to split it up in some way. This can involve crushing a rock to find the different minerals or even parsing a sentence in an English lesson.

In order to examine 'those segments' you need to be able to identify them. Colour TV, photography (even with film) and our eyes all use selection (i.e. filtering) to analyse the colour of an object. Our eyes are NOT spectrometers. They use just three basic bands of colour ('filters') and do the best they can to analyse / classify subjective colours with that. This link discusses the basics of colour perception. If you search around, you will probably find there are many more articles on the artistic aspects of colour and composition of pictures than the nuts and bolts of colorimetry.

Our colour vision makes do with basically three different filters which examine the relative levels of reds, greens and blues in the light arriving at the retina (a very simple three-colour analysis; here's another link). I notice your mention of the word "prism" in your thread title. That is, indeed, a form of filtering which diverts different wavelengths in different directions. A lens that's badly made can have this effect at edges between two parts of an object (chromatic aberration) and that's a danged nuisance.

Your idea, as you state it, has one huge problem because light frequency remains the same over its whole path. Without measuring, analysing and quantifying, you can't show what a picture will look like if the frequencies are shifted about.

My mention of extra filters for obtaining more colour information in a picture has the message that you can introduce 'false colour' into a picture. For astrophotography you absolutely have to use selective filtering of the wavelengths from a certain object because it is just too faint and swamped out by other brighter objects.
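A toy numerical version of that three-filter analysis may make the point concrete. Everything below is an assumption for illustration: the sensitivity curves are made-up Gaussians, not real cone or CIE data. Integrating a spectrum against each curve reduces the whole spectrum to just three numbers, which is all that the eye, or an RGB camera, ever records.

```python
import numpy as np

wl = np.linspace(380, 700, 321)   # wavelength grid in nm
dx = wl[1] - wl[0]

def gaussian(centre, width):
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

# Made-up stand-ins for three broad sensitivity curves (not real cone data).
sens = {"R": gaussian(600, 40), "G": gaussian(550, 40), "B": gaussian(450, 30)}

# An arbitrary test spectrum: a narrow emission line sitting on a dim broadband background.
spectrum = 0.1 + gaussian(580, 5)

# The entire spectrum collapses to three numbers; very different spectra can give the same triple.
tristimulus = {name: float(np.sum(spectrum * s) * dx) for name, s in sens.items()}
print(tristimulus)
```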
 
  • #12
sophiecentaur said:
Actually, if you want to analyse virtually anything you need to split it up in some way. ...
I just assume the human eye can only deal with so many colors at a time. I figured I could maybe divide the visible spectrum into 10 portions, represent those wavelengths using the full visible spectrum (only initially, for representation), then give them appropriate representation after the images are composited.

Human eyes can distinguish between 8 and 10 million colors, but I think screens can represent billions? Maybe adding time and flipping between images can produce an interesting effect?

Strangely a mantis shrimp has like 20 cone cells yet human color vision is better. I take that to mean processing of the information is a major factor.
 
  • #13
LightningInAJar said:
Human eyes can distinguish between 8 and 10 million colors, but I think screens can represent billions?
Most monitors have 256 levels of brightness per RGB channel. 256³ ≈ 16.8 million colours.

LightningInAJar said:
I just assume the human eye can only deal with so many colors at a time.
Humans are pretty good at interpreting full colour images. It's kind of our thing. Now, it's true that, if we want to examine specific sets of data individually, we might want to break those out:
Hydrogen-Alpha (left), Oxygen-III (middle) and Sulphur-II (right):

https://www.lightvortexastronomy.com/tutorial-narrowband-hubble-palette.html
LightningInAJar said:
Strangely a mantis shrimp has like 20 cone cells yet human color vision is better.
Depends what you mean by "better". Mantis shrimp have a broader range of frequency sensitivity (and polarisation), but I suspect humans have a greater density of receptors at the fovea - meaning higher resolution.

Interestingly, apparently "mantis shrimp struggle to tell the difference between color shades that human eyes easily discern."
https://www.washingtonpost.com/news...re-has-the-oddest-eyes-in-the-animal-kingdom/
Some interesting factoids there about mantis shrimp vision.
 
  • #14
LightningInAJar said:
Is it possible to create a lens that expands a narrow range within the visible light spectrum to represent it using full visible spectrum?
Spectroscopy, or spectrophotometry?
 
  • #15
LightningInAJar said:
I just assume the human eye can only deal with so many colors at a time.
Best not make 'assumptions' about these things. Scientists took hundreds of years to arrive at valid models of colour vision. Nowadays the model has become pretty well established, although there are some alternative views which attempt to explain the psychology of vision in detail and why we often experience things differently under different conditions. We actually deal with a limited number of experiences at any one time, whether it's colour, sound, taste etc., because we only have a certain available processing power. Avoid going down that path until you know an awful lot more about the basics.

Perhaps you should get better acquainted with the actual facts about colour vision and imaging. It seems to me that you are trying to bend what people have written here to fit your personal ideas. That tends to lead nowhere. Did you actually read those two links I gave you?

Google topics like colour vision and ways of colour mixing. You will need to distinguish between additive and subtractive mixing, of course, and steer clear of discussions of composition and aesthetics. They have their places, of course, but not when you are discussing the nuts and bolts of things.
 
  • #16
LightningInAJar said:
I was trying to consider a way a normal camera could photograph the same scene with multiple photos and composite them to create an image with a much larger color palette. I assume file formats have fewer limits than the physical capture.
Spectrophotometer:
https://en.wikipedia.org/wiki/Imaging_spectrometer
 
  • #18
LightningInAJar said:
I figured I could maybe divide the visible spectrum into 10 portions, represent those wavelengths using the full visible spectrum (only initially, for representation), then give them appropriate representation after the images are composited.
I guess you have moved on from the 'lens' idea (that's good). Yes, ten filters can produce ten data sets which could provide more information for picture processing. The human analysis curves yield a particular type of perception. The 'colours' we see are not all spectral lines or bands. There are those which are produced by mixes of reds and blues but a low level of greens. We see them as colours that are 'on the other side of white'. Colour TV can represent these well. There could be a problem with using more analysis bands because the light energy passing through each filter would be less and the total amount admitted into the lens would be shared by the ten sensors (or used sequentially, which would also reduce the signal to noise ratio for each sensor).
Bottom line is that different analyses can be (and are being) used for specialist applications, but the 'colour fidelity' of regular TV may not benefit much because the analysis wouldn't match that of the eye.
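A quick shot-noise estimate of that sharing-out of light, as a sketch; the photon count is an assumed round number and the model is simple Poisson statistics:

```python
import math

photons_total = 30_000                 # assumed photons available per pixel per exposure
for n_bands in (3, 10):
    per_band = photons_total / n_bands
    snr = math.sqrt(per_band)          # Poisson statistics: SNR = N / sqrt(N) = sqrt(N)
    print(f"{n_bands:2d} bands: {per_band:7.0f} photons per band, SNR ~ {snr:5.1f}")
```

With these assumed numbers, going from three bands to ten roughly halves the per-band signal-to-noise ratio for the same total light, which is the trade-off described above.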
 
  • #19
sophiecentaur said:
Best not make 'assumptions' about these things. Scientists took hundreds of years to arrive at valid models of colour vision. ...
I have read that color blind people can identify people wearing camouflage better than people with trichromatic vision, which also emphasizes processing and how too much information at one time might actually be bad.
 
  • #20
LightningInAJar said:
I have read that color blind people can identify people wearing camouflage better than people with trichromatic vision, which also emphasizes processing and how too much information at one time might actually be bad.
True.
And for further reading, have a look at the other end of the human vision spectrum: tetrachromats.
 
  • #21
LightningInAJar said:
which also emphasizes processing and how too much information at one time might actually be bad.
It just shows that evolution 'makes choices' between use of resources (energy etc.) and overall benefit. The tristimulus system has proved beneficial in more circumstances than a 'bistimulus' system might have done. Likewise, a 'polystimulus' system would involve more cost without a corresponding advantage.
If there had been a vast benefit (for humans) in an extended range into the infra red then we might have a tetrastimulus arrangement.
Your use of the word "bad" is not really appropriate; we don't often meet with Ishihara colourblindness tests in everyday life and (to be fair) camo patterns are specially designed to fool us.
 
  • #22
sophiecentaur said:
... we might have a tetrastimulus arrangement.
We do. There are documented cases of tetrachromats. These people (mostly women) have a fourth receptor that allows them to make far more subtle distinctions in the greens than the rest of us.
 
  • #23
DaveC426913 said:
We do. There are documented cases of tetrachromats. These people (mostly women) have a fourth receptor that allows them to make far more subtle distinctions in the greens than the rest of us.
That doesn't surprise me too much. Do you have a reference?
I wonder what they can't do, to make up for this extra 'can do'?
 
  • #24
DaveC426913 said:
True.
And for further reading, have a look at the other end of the human vision spectrum: tetrachromats.
Oh I have. Rare. Still not sure if they do in fact see more colors or not. I know some find it distracting. I assume maybe half of the "green" cones are closer to being "yellow" cones?

Do you know if there are any animals that don't contrast colors as we do? I have heard there are ways to trick our brain into seeing red-green or blue-yellow but I haven't managed it.
 
  • #25
LightningInAJar said:
Oh I have. Rare. Still not sure if they do in fact see more colors or not. I know some find it distracting. I assume maybe half of the "green" cones are closer to being "yellow" cones?
There was a TV show that interviewed a woman suspected of being a tetrachromat. She said people were always wearing outfits that didn't match - usually green. They thought the top and bottom were the same colour, but she saw them as two obviously distinct shades of green.

LightningInAJar said:
Do you know if there are any animals that don't contrast colors as we do?
Well, dogs have a different perception than us.
 
  • #26
LightningInAJar said:
I assume maybe half of the "green" cones are closer to being "yellow" cones?
You're making another "assumption" about what is going on. There are no special "yellow" or "green" (or pink, turquoise or purple) cones. There are just three types of cone; some can detect a broad range of what we would call Reds, others detect Greens and others detect Blues. If you look at something and it appears to be Yellow, your Green receptors are getting some light and the Red receptors are getting some. Your brain takes the combination of R and G and calls it some shade of yellow. Moreover, there is a huge range of mixes of wavelengths that can give the same sensation of 'colour'. It is totally subjective and no two people will agree precisely what colour to call something or how to match it. Tetrastimulus vision is a complete red herring in the context of the basic model; it neither proves nor disproves anything. You must be prepared to do some reading about this and not make up your own stuff. If it were like you imagine, colour cameras just wouldn't work, because they really, really do only have three sets of sensors.
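The "huge range of mixes of wavelengths that give the same sensation" point (metamerism) can be shown numerically. This sketch reuses made-up Gaussian sensitivity curves, so all the numbers are assumptions; it constructs a second, physically different spectrum whose difference from the first is orthogonal to all three curves, so both spectra produce the same three sensor values.

```python
import numpy as np

wl = np.linspace(380, 700, 321)                  # wavelength grid in nm

def gaussian(centre, width):
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

# Toy broad sensitivity curves (assumptions, not real cone data), one row per "cone".
S = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 30)])

spec1 = 0.5 + gaussian(570, 30)                  # an arbitrary smooth spectrum
wiggle = gaussian(520, 8) - gaussian(620, 8)     # a spectral change we want the sensors to miss

# Remove the part of the wiggle the three sensors can "see" (least-squares projection).
coef, *_ = np.linalg.lstsq(S.T, wiggle, rcond=None)
invisible = wiggle - S.T @ coef

spec2 = spec1 + 0.3 * invisible                  # a genuinely different spectrum

print(np.round(S @ spec1, 4))                    # three sensor responses for spectrum 1
print(np.round(S @ spec2, 4))                    # the same three numbers: a metamer
```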
 
  • #27
DaveC426913 said:
There was a TV show that interviewed a woman suspected of being a tetrachromat. She said people were always wearing outfits that didn't match - usually green. They thought the top and bottom were the same colour, but she saw them as two obviously distinct shades of green. Well, dogs have a different perception than us.
I mean that we can't see red-green because our brain or optic system won't give us a perception for it.

https://www.scientificamerican.com/...en are called,reddish green or yellowish blue.

Know if any animals lack that inhibition to color mixing?
 
  • #28
LightningInAJar said:
I mean that we can't see red-green because our brain or optic system won't give us a perception for it.
Could you make that statement clearer, please? It's certainly one or the other of those options.

Inability to distinguish between reds and greens could be hardware- or software-based. In a camera, for instance, you could disconnect a red or green channel OR you could connect them in parallel so they each get the same signal. Otoh, you could achieve the same in Photoshop with the software controls. Where does that take us?
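A small sketch of that 'connect them in parallel' idea, done in software rather than hardware; the filename is an assumption. Feeding the same averaged signal to both the red and green channels means reds and greens can no longer be told apart, a crude stand-in for red-green confusion.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("scene.jpg"), dtype=np.float32)   # assumed 8-bit RGB image

merged = img.copy()
rg_mean = (img[..., 0] + img[..., 1]) / 2.0    # one signal driving both channels
merged[..., 0] = rg_mean                       # red channel
merged[..., 1] = rg_mean                       # green channel

Image.fromarray(merged.astype(np.uint8)).save("rg_merged.png")
```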

If you now want to discuss red-green colourblindness then I recommend that you first try to understand the mechanism of normal, functioning colour perception. There is a common method for coping with stuff that is initially too hard and that is to shift the goalposts and deflect the argument to something else. It seldom actually helps.
 

FAQ: Prism Camera Lens: Expanding Visible Light Spectrum?


What is a prism camera lens?

A prism camera lens is a specialized optical assembly that uses prisms to refract and disperse incoming light into its constituent colors, so that different parts of the visible spectrum can be recorded separately rather than merged into three broad RGB channels. This can provide more spectral information than a standard capture.

How does a prism camera lens work?

A prism camera lens works by passing incoming light through a prism or a series of prisms. The prisms bend (refract) the light at different angles depending on its wavelength, spreading it out into a spectrum. This process, known as dispersion, separates the light by wavelength so the camera can record narrower spectral bands than its usual three broad color channels.

What are the benefits of using a prism camera lens?

The primary benefits of using a prism camera lens include finer spectral discrimination and more color information per scene than a standard three-channel capture. This makes such systems particularly useful in scientific imaging, art photography, and other applications where color fidelity is crucial.

Are there any drawbacks to using a prism camera lens?

Some potential drawbacks of using a prism camera lens include increased complexity and cost, potential for light loss due to multiple refractions, and the need for precise alignment and calibration. Additionally, the size and weight of the lens system may be greater than standard lenses.

Can a prism camera lens be used with any camera?

While a prism camera lens can theoretically be used with many types of cameras, practical use may require specific mounts, adapters, or modifications to the camera body. Compatibility depends on the design of both the lens and the camera, and some cameras may not fully benefit from the expanded spectrum capabilities without additional software or hardware adjustments.

