Level of detail in prime focus vs eyepiece images

  • #1
PhysicoRaj
TL;DR Summary
I observed a significant difference in details in a digitally magnified prime focus image compared to an optically magnified eyepiece image, for the same objective lens. Why?
I hope this is the right place to ask this.

I was clicking the Sun a few days ago with my beginner scope and a DSLR. The scope is a 60mm aperture f/12 refractor and the DSLR is a Canon SL3 (APS-C, 6000x4000). First I used a 20mm eyepiece (35x) to view the Sun, saw it in all its glory: the sunspots and the region around the spots that appears like hair or filaments (not sure what exactly they are called). Then I shot some images with my DSLR at prime focus. I realized that the apparent size of the Sun's disc in the image was smaller at prime focus, which is because the eyepiece provided magnification. But to my eye, viewing both the eyepiece image and the image on the DSLR LCD screen, the difference was not too much; maybe the eyepiece image was 1.5-1.75 times the size of the image on the DSLR screen.

But what I did not expect was the loss in detail. The eyepiece image was much more detailed and sharp compared to the prime focus image recorded on the sensor. I could see the spots, but those hair-like / filament-ish outer regions of the spots now looked like the sunspots themselves. Unfortunately I cannot 'show' how I 'saw' through the eyepiece. I don't have an eyepiece projection mechanism set up.

Does the amount of detail in the image depend on the eyepiece magnification, or is it only a function of aperture and exposure? If I understand correctly, the objective lens sets the resolution and the resolution sets the amount of useful magnification. But the detail / information should still be present in the primary image? So I do not understand why the same primary image, when magnified optically with an eyepiece, yields more detail, but when magnified digitally has significantly less.

I have a few explanations for my observation:
  1. I was not clicking RAW. Can JPEG compression remove so much detail?
  2. Since the prime focus image is smaller, it was harder to focus it crisply, compared to the eyepiece, where I would focus until I saw those details around the sunspots.
  3. The prime focus image is not magnified enough to bring out the details (The DSLR sensor pixels are not 'seeing' resolved details)
  4. All of the above to certain degrees.
In the case that reason #3 is the best explanation, does that mean that the amount of detail in the image is always limited by the objective lens and sensor pixel density when doing prime focus (because I am stuck with no optical magnification to bring out details)?
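Point #3 can be sanity-checked with a quick calculation. A minimal sketch, assuming the figures above (60 mm aperture, f/12, hence 720 mm focal length) and a guessed ~3.7 µm pixel pitch for the SL3 sensor:

```python
import math

# Assumed from the post: 60 mm f/12 refractor -> 720 mm focal length.
aperture_mm = 60.0
focal_mm = aperture_mm * 12
pixel_um = 3.7          # approximate Canon SL3 pixel pitch (assumption)
wavelength_nm = 550.0   # green light

# Rayleigh resolution limit of the objective, in arcseconds
rayleigh_arcsec = math.degrees(1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600

# Image scale at prime focus, in arcseconds per pixel
scale_arcsec_per_px = 206.265 * pixel_um / focal_mm

# Nyquist criterion: want about 2 pixels across the resolution limit
pixels_across_limit = rayleigh_arcsec / scale_arcsec_per_px
print(f"Rayleigh limit: {rayleigh_arcsec:.2f} arcsec")
print(f"Image scale:    {scale_arcsec_per_px:.2f} arcsec/px")
print(f"Pixels across the limit: {pixels_across_limit:.1f}")
```

With these assumed numbers the sensor puts roughly two pixels across the diffraction limit, i.e. the prime-focus image is close to Nyquist-sampled, so undersampling alone is unlikely to explain a large loss of detail.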

Thanks for any advice / help!

Edit 1:
A fellow astrophotographer online told me that single exposures are prone to atmospheric (seeing) disturbance, and the absence of tracking causes motion blur that smears out the details, while viewing live with the eyes lets the brain do some filtering so we see persistent detail that cannot be caught in an exposure (like lucky imaging). This makes sense to me, but I still think I saw a significant difference in detail and would appreciate your opinions on my points. Thanks.

Edit 2:
After fiddling with the settings and options in AutoStakkert!3, I managed to get a better picture after stacking ~300 frames from a recorded MP4 video (image attached). This is the best image I have so far! But it is still not as detailed as the live eyepiece view.
 

Attachments

  • ap14100.png (36.4 KB)
  • #2
All of the above. What are you using as a solar filter?
 
  • #3
AndyG said:
All of the above. What are you using as a solar filter?
Mylar white light solar filter that goes on the aperture.

If you think it's all of them (including the reason given in the edit), do you have a stronger bias toward any one or two of them? (Because I could try working on those first.)
 
  • #4
The mylar white light should be ok.
Focus is the hardest thing and is the first to address.
Exposure is next - you will easily blow out detail with overexposure - manual settings will be required for the dark details you are trying to pick out.
You will have poorer results if you are not stacking images - I suggest looking at a YouTube video or two on how to take a short movie of the Sun and stack the frames within it; there are multiple free tools for the job. The final item in your list is really the matching of the chip pixel size to the image scale, but you need to look at the other items first, as they are the most significant, and you should be able to get some decent results. Dave Eagle published a short pamphlet on solar imaging and processing which you may be able to find; it is very good.
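The stack-a-movie advice can be sketched in a few lines. This is a toy illustration only (pure NumPy, frames assumed already aligned; real tools such as AutoStakkert! also register each frame and sharpen the result):

```python
import numpy as np

def laplacian_var(frame):
    """Sharpness proxy: variance of a discrete Laplacian (higher = sharper)."""
    lap = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
           - 4 * frame)
    return lap.var()

def lucky_stack(frames, keep_fraction=0.25):
    """Keep only the sharpest fraction of frames and average them,
    which suppresses noise while rejecting seeing-blurred frames."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    scores = [laplacian_var(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:n_keep]
    return np.mean([frames[i] for i in best], axis=0)
```

Averaging many short frames beats one long exposure because each short frame freezes a different instant of the seeing; keeping only the sharpest instants is the "lucky imaging" idea mentioned in the edit.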
 
  • #5
AndyG said:
The mylar white light should be ok.
Focus is the hardest thing and is the first to address.
Exposure is next - you will easily blow out detail with overexposure - manual settings will be required for the dark details you are trying to pick out.
You will have poorer results if you are not stacking images - I suggest looking at a YouTube video or two on how to take a short movie of the Sun and stack the frames within it; there are multiple free tools for the job. The final item in your list is really the matching of the chip pixel size to the image scale, but you need to look at the other items first, as they are the most significant, and you should be able to get some decent results. Dave Eagle published a short pamphlet on solar imaging and processing which you may be able to find; it is very good.
Yes, focus is something I need to work on immediately. Right now I am relying on the amount of surface detail, the sharpness of the edge of the disc, contrast, etc., to know when I'm in perfect focus. But it might not be enough. The DSLR is a pretty heavy alternative to an eyepiece or Barlow sitting in the focuser tube, and even the slightest shake or vibration could prevent achieving perfect focus.

Coming to exposure: I shoot in full manual, and I try to set an optimum exposure where the image is bright enough and has the most detail. But I do this by looking at the live view on the LCD screen, and I am not sure the LCD screen exactly replicates what the sensor records.

I have tried stacking frames from a movie. My camera unfortunately does not store uncompressed movies. I managed to stack the compressed frames in AutoStakkert!3 and got a final image worse than my single-exposure results and the eyepiece view. This might be due to the compressed frames or something else I have to figure out. If compression is really the issue, I hope I can use the HDMI out to record RAW video externally and stack those frames.

Matching chip pixel size to the image size - are you talking about matching the sensor resolution to the optical resolution available from the primary lens? The sensor is an APS-C (crop) 6000x4000. I think it is 4 microns per pixel, need to confirm that. Scope is f/12 with a 1.25" focus tube.

Edit: After fiddling with the settings and options in AS!3, I managed to get a better picture (attached in #1). This is the best image I have so far! But it is still not as detailed as the live eyepiece view. Maybe now compression makes the difference.

Thanks.
 
  • #6
PhysicoRaj said:
Barlow that sits in the Focus tube and it could prevent achieving perfect focus
If you can't reach focus then perhaps you could use a short extension tube. If you want to find the length of extension you could see how far the camera needs to be withdrawn to go through the sharp focus position. Good focus is not essential - just going through the best focus position will let you select a suitable extension tube.

Sometimes a star diagonal will give you a workable focus position. It may depend on how much range your focuser has. Just experiment.
 
  • #7
sophiecentaur said:
If you can't reach focus then perhaps you could use a short extension tube. If you want to find the length of extension you could see how far the camera needs to be withdrawn to go through the sharp focus position.
I can reach focus, I meant to say drifting out of focus due to the weight of the DSLR pulling on the focusing rack.

sophiecentaur said:
Sometimes a star diagonal will give you a workable focus position. It may depend on how much range your focuser has. Just experiment.
I found that a star diagonal made it worse, so I stopped using it.
 
  • #8
PhysicoRaj said:
the weight of the DSLR pulling on the focusing rack.
I getcha now. Is there a screw to stop the barrel slipping? A beefy DSLR is quite an ask for a focuser that's of a reasonable price. It would be even worse if you were taking pictures around the Zenith.
The star diagonal is a bit of red herring of mine. The mirror can only damage the image.
Re-reading your post, you seem to be comparing a visual in the EP with what you see on your camera display. Is the quality of the recorded image just as bad when viewed indoors on your monitor? Thing is, there are so many variables in this, and any discussion can take us up many blind alleys until the 'cause' is found.
For focus, live view can be a bit limiting unless you can magnify the viewer image. I focus my Pentax on maximum magnification and the results (the very few I have) are fine.
PS have you made a similar comparison with night time images?
PPS Could it just be camera shake? A remote release or, even better, a delay after the mirror settles down can improve fuzziness.
 
  • #9
sophiecentaur said:
I getcha now. Is there a screw to stop the barrel slipping? A beefy DSLR is quite an ask for a focuser that's of a reasonable price. It would be even worse if you were taking pictures around the Zenith.
No stopper screws. I usually capture near the zenith, because there are fewer seeing issues. That means more focusing effort against gravity. I am thinking of adding a layer of friction material on the focus rack teeth, to retard the motion. But if I add too much resistance to the focusing rack, as a side effect it will shake more when I operate the focus knob, which is also one of my issues!

sophiecentaur said:
The star diagonal is a bit of red herring of mine. The mirror can only damage the image.
I seldom use it. Only for eyepiece views around zenith. Otherwise I am fine exercising my neck for a night.

sophiecentaur said:
Re-reading your post, you seem to be comparing a visual in the EP with what you see on your camera display. Is the quality of the recorded image just as bad when viewed indoors on your monitor?
Yes. Loaded the clicks on my laptop and went back and forth between the eyepiece live and the image captured. Quite a difference.

sophiecentaur said:
Thing is, there are so many variables in this, and any discussion can take us up many blind alleys until the 'cause' is found.
I am starting to realize this. I know I have to freeze some variables to get a meaningful direction. I already tried stacking - which was an improvement over a single exposure, but not anywhere near the EP visual. Next I want to sort out my setup hardware issues, which should leave me with the optics, electronics and software.

sophiecentaur said:
For focus, live view can be a bit limiting unless you can magnify the viewer image. I focus my Pentax on maximum magnification and the results (the very few I have) are fine.
Well, that's one more variable I thought could be eliminated if I take the live HDMI out from the cam and view it on an external screen. That would require a power source for the screen and would be a hassle to set up every session. But maybe worth trying once to see whether it turns out to be a factor here.

sophiecentaur said:
PS have you made a similar comparison with night time images?
No. I got hold of this cam very recently and the moon has been absconding since; never hated clouds until now.

sophiecentaur said:
PPS Could it just be camera shake? A remote release or, even better, a delay after the mirror settles down can improve fuzziness.
I have a remote release. But there could be invisible shakes, which I believe should be addressed by the stacking software. The stacking software did not improve over the EP view, but there is a catch: the movie I shot for stacking the frames is MP4, which is compressed / encoded. My cam unfortunately does not shoot RAW or uncompressed videos, only RAW stills. So I should probably try RAW images and stack them?
 
  • #10
Introducing more optics (two or three lenses in the EP) should only make things worse. The only thing that should be better with EP viewing would be less mechanical shake, due to the smaller load. How well is the scope balanced with the camera hung on it?
PhysicoRaj said:
I have a remote release
Do you jump for joy when you operate the remote?
If the mirror isn't locked up then its movement (clunk) isn't removed by remote. My DSLR has a 2s delay setting available which gives the mirror a chance to stop shaking.
Another thought: you say that focussing causes image shift so could there be some looseness elsewhere? This is grasping at straws because you might see the same effect through the EP.
Presumably you have recently taken good sharp pictures on your DSLR?
What about trying the experiment on other daytime images - like distant TV aerials?
Crazy idea but does the poor focus give coloured blurring? That could indicate extra sensitivity of your sensor plus poor chromatic aberration of the objective. The sensor should have an IR filter, of course. Has it been modded for astro?
You will have to accept a load of nonsense questions as I'm now thinking aloud. My thrashing around usually ends up with solutions, so bear with it. Other PF'ers may have ideas too, if we keep the thread running near the top.
 
  • #11
sophiecentaur said:
Introducing more optics (two or three lenses in the EP) should only make things worse. The only things that should be better with EP viewing would be mechanical shake due to extra load. How well is the scope balanced, with the camera hung on it?
It's an alt-azimuth mount, so no counterweights come with the scope. But since most of my usage is within 30 degrees of the zenith, I thought counterweights for balancing about the altitude axis wouldn't be required. The camera load is mostly along the focal axis. But I should mention that wind can get it to shake very easily. I don't see a significant difference in wind-induced shake between camera and eyepiece.
sophiecentaur said:
Do you jump for joy when you operate the remote?
If the mirror isn't locked up then its movement (clunk) isn't removed by remote. My DSLR has a 2s delay setting available which gives the mirror a chance to stop shaking.
If my understanding is correct, at the beginning of every exposure the mirror moves up and the shutter opens irrespective of the delay behind it, if this is the 'clunk' you are talking about. When taking multiple exposures, though, the clunk can creep into more than one exposure.

But recording a video and stacking should eliminate the shutter + mirror shake right?

sophiecentaur said:
Another thought: you say that focussing causes image shift so could there be some looseness elsewhere? This is grasping at straws because you might see the same effect through the EP.
Presumably you have recently taken good sharp pictures on your DSLR?
What about trying the experiment on other daytime images - like distant TV aerials?
I have no complaints about the EP view; it focuses fine. With the DSLR alone, no complaints, I've taken sharp images. With the DSLR + telescope, I have shot terrestrial objects and felt satisfied, but I haven't captured anything terrestrial that is like sunspots, i.e., that can show me the fine line between 'detailed' and 'not detailed enough'.

I will try to target something like that and do a similar comparison. This will be my next experiment.

sophiecentaur said:
Crazy idea but does the poor focus give coloured blurring? That could indicate extra sensitivity of your sensor plus poor chromatic aberration of the objective. The sensor should have an IR filter, of course. Has it been modded for astro?
I have observed some negligible chromatic aberration. But it does not vary between DSLR / EP.
No, my DSLR is unmodded, it retains the IR cut.

sophiecentaur said:
My thrashing around usually ends up with solutions so bear with it.
Not new to that one! :wink:
 
  • #12
PhysicoRaj said:
Its an alt-azimuth mount, so no counterweights that come with the scope
I was thinking of fore and aft balance. The scope may work better if it's moved forward to compensate for the weight of the DSLR.
PhysicoRaj said:
shutter opens irrespective of the delay behind it, if this is the 'clunk' you are talking about.
With the particular setting, you press the release, the mirror goes up, there's a delay and then the exposure is made. You could always look in the menu for exposure options. It's almost as good as the mirror lift lever that you used to get in high end mechanical cameras. That facility is easy to implement in an auto camera - could even be done with software but I guess they can charge more.
Live view and video both do that in any case.
PhysicoRaj said:
I haven't captured anything terrestrial that are like sunspots,
Car number plates and distant TV aerials (or distant street lamps at night) would be as sharp and contrasty as sunspots. I think exposure settings were discussed higher up already. I have problems setting my live view suitably for daytime / night time viewing because the sun can 'look' wrong in the day and stars are invisible at night without the right live view settings. You really need to eliminate possible camera causes before you panic about the scope. After all, the only thing the scope has is a lens and a tube, as long as it's all firm enough.

Once this has been sorted out - and it will be - you'll wonder what all the fuss was about. I've been there several times. But you know, if you spend more money £$£$£$£$£$ you can solve anything.
 
  • #13
sophiecentaur said:
I was thinking of for and aft balance. The scope may work better if it's moved forward to compensate for the weight of the dslr.
I need to think about how to add a fore balance weight. But in most of my solar images the scope points almost straight up, so I am not sure how much of a difference longitudinal balance would make there. But generally, for the scope, I have to think of a way to attach weights at the fore end.

sophiecentaur said:
With the particular setting, you press the release, the mirror goes up, there's a delay and then the exposure is made. You could always look in the menu for exposure options.
Unfortunately my camera doesn't do this. I may have to go through the menu many more times, but at least for now I don't find a way to send the mirror up for a delay. Anyway, when using Live View, it always stays up and only the shutter operates.

sophiecentaur said:
Live view and video both do that in any case.
I always use live view, so the mirror stays up. I checked again, and even for multiple exposures it's only the shutter that opens and closes. So I think we can rule out mirror clunk?

sophiecentaur said:
Car number plates and distant TV aerials (or distant street lamps at night) would be as sharp and contrasty as sunspots. I think exposure settings were discussed higher up already. I have problems setting my live view suitably for daytime / night time viewing because the sun can 'look' wrong in the day and stars are invisible at night without the right live view settings.
I will shoot these next for comparison. I have been shooting only in the day for a week, so I think my eyes have become accustomed to the view on the screen. Live view settings are constant for the time being.

sophiecentaur said:
But you know, if you spend more money £$£$£$£$£$ you can solve anything.
There might not be an end to it. I really stretched my budget to get this beginner equipment for learning. Well, at least spending less money means I'm learning the hardcore way! Who knows if I'd be thinking of all these technicalities if I had a pricey scope and accessories.
 
  • #14
RAW files store a lot more information in the dark areas of the photo than JPG files, because JPGs are limited to 256 levels of brightness per color channel versus the 4096 or more levels per channel that RAW files often have. The final look and feel of your images will be greatly influenced by the settings you choose in the software you use to convert from RAW to JPG: essentially you're deciding which of the 4096 brightness levels to "throw out" when you convert down to the 256-level limit of a JPG. Generally you will need different conversion settings for every image to obtain optimal results, and that part of the process is more of an art than a science. On the left below is a JPG straight from the camera; on the right below is a RAW file converted to JPG in Adobe Lightroom with custom conversion settings. You can see there's a lot of information in the image converted from the RAW file that isn't visible in the straight-from-camera JPG (the grass, for example).

DSC_3078-2.jpg


https://www.speakev.com/attachments/img-3033-gif.149501/

5E11FD92-549D-4E1C-A0CF-8483AD47AEAC.jpeg


B8E4128B-AD5E-4401-A977-D73A900E1BC3.jpeg
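The 4096-versus-256-levels point above can be illustrated with a toy example. This assumes a naive linear conversion; real RAW converters apply a gamma curve, which preserves more shadow levels than this worst case:

```python
import numpy as np

# A faint shadow gradient occupying 64 distinct 12-bit sensor levels (0..4095)
shadow_12bit = np.arange(0, 64)

# Naive JPEG-style conversion: squeeze the 4096-level range into 256 levels
shadow_8bit = (shadow_12bit * 255 // 4095).astype(np.uint8)

print(len(np.unique(shadow_12bit)), "distinct levels in the RAW data")
print(len(np.unique(shadow_8bit)), "distinct levels after 8-bit conversion")
```

The 64 distinct shadow levels collapse to a handful of 8-bit values, which is exactly the detail a custom RAW conversion can choose to keep instead.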
 
  • #15
Devin-M said:
RAW files store a lot more information in the dark areas of the photo than JPG files, because JPGs are limited to 256 levels of brightness per color channel versus the 4096 or more levels per channel that RAW files often have. The final look and feel of your images will be greatly influenced by the settings you choose in the software you use to convert from RAW to JPG: essentially you're deciding which of the 4096 brightness levels to "throw out" when you convert down to the 256-level limit of a JPG. Generally you will need different conversion settings for every image to obtain optimal results, and that part of the process is more of an art than a science. On the left below is a JPG straight from the camera; on the right below is a RAW file converted to JPG in Adobe Lightroom with custom conversion settings. You can see there's a lot of information in the image converted from the RAW file that isn't visible in the straight-from-camera JPG (the grass, for example).
I see. Today I compared a JPG off my camera to a PNG made by processing RAW from the same camera through an editor, and was mind-blown. I could bring out maybe 10x the detail just by adjusting levels and curves.

I plan to take several RAW shots and stack them, and compare them to the stacked frames from MP4. That could be the missing link between the capture and eyepiece view.

Thanks.
 
  • #16
PhysicoRaj said:
I need to think how to add a fore balance weight.
People tend not to use added weights for fore/aft balance. It's usual to move the scope in the scope rings or on the dovetail clamp.
 
  • #18
The mirror flip will vibrate your whole telescope, causing motion blur in the image. What you can do is turn on exposure delay mode, set to 3 seconds, which lets the mirror vibration subside for 3 seconds prior to shutter release. Then turn on interval timer shooting mode, if you don't have an external intervalometer, so you can take multiple exposures without having to touch the camera, which is another source of camera shake. I recommend getting an external intervalometer so you can program exposures longer than 30 seconds, like 2 minutes or 5 minutes, in bulb mode, but for that you'd also need an equatorial mount.

3B45E372-EF78-485C-8A2E-2056B782505C.jpeg

970CBA1D-9F64-4EFF-9AEE-8D83782310E1.jpeg
 
  • #19
Ah yes. I'm not suggesting you spend more money on this, but you could choose to use scope rings and an alternative fixing. (See the Newtonian scopes on the left of that page.) You can balance pretty much anything.

I'd be inclined to do all your learning on what you've got. Then you can decide to give it up with not too much cost or decide on a life of poverty and max out your credit cards on extravagance.

I would recommend considering second hand gear though. You can get perfectly good stuff at half the shop prices. Astronomers are pretty careful people and tend to look after their gear.
 
  • #20
Another tip: when you're focusing in Live View mode, you'll want to zoom the live view to 100% so you can judge focus on the individual pixels and not on the image overall.
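Zooming to 100% is essentially judging a focus metric by eye, and the same idea can be automated when capturing through a computer. A sketch (the `focus_score` helper is hypothetical, not part of any camera SDK; it only makes sense when comparing frames of the same scene while turning the focus knob):

```python
import numpy as np

def focus_score(gray):
    """Gradient-energy focus metric: higher means sharper.
    Only meaningful when comparing frames of the SAME scene."""
    gy, gx = np.gradient(np.asarray(gray, dtype=np.float64))
    return float(np.mean(gx**2 + gy**2))
```

A capture loop could print this score continuously; you stop adjusting focus at the maximum.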
 
  • #21
PhysicoRaj said:
I could bring out maybe 10x the details by just adjusting levels and curves.
All the big boys use levels and curves (they may call it names like stretching and histogram but it's the same thing, basically.) JPEG is really not good if you want to reveal stuff that it has done its best to suppress, in the interest of data reduction. JPEG was invented for regular pictures which astrophotographs are definitely not.
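At its simplest, the levels/curves ("stretching") operation is a linear remap of chosen black and white points. A minimal NumPy sketch (percentile-based black/white points are an assumption here; editors let you drag them by hand):

```python
import numpy as np

def stretch(img, low_pct=0.5, high_pct=99.5):
    """Linear 'levels' stretch: map the chosen percentiles to 0..1,
    clipping everything outside. Same idea as moving the black and
    white points in a levels/curves tool."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```

Detail sitting in a narrow brightness band (as on the solar disc) gets expanded to the full output range, which is why levels and curves reveal so much.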
 
  • #22
 
  • #23
Devin-M said:
The mirror flip will vibrate your whole telescope, causing motion blur in the image. What you can do is turn on exposure delay mode, set to 3 seconds, which lets the mirror vibration subside for 3 seconds prior to shutter release. Then turn on interval timer shooting mode, if you don't have an external intervalometer, so you can take multiple exposures without having to touch the camera, which is another source of camera shake. I recommend getting an external intervalometer so you can program exposures longer than 30 seconds, like 2 minutes or 5 minutes, in bulb mode, but for that you'd also need an equatorial mount.

View attachment 288880
View attachment 288881
I have an intervalometer; I take exposures with that. And after checking, my camera doesn't have this exposure delay mode. What I do is keep Live View on all the time, so the mirror is flipped up for the entire session. I saw that only the shutter operates for every exposure.

The only source of camera shake right now is when I frequently touch the focus knob. The scope keeps drifting out of focus due to the weight of the cam.

sophiecentaur said:
Ah yes. I'm not suggesting you spend more money on this, but you could choose to use scope rings and an alternative fixing. (See the Newtonian scopes on the left of that page.) You can balance pretty much anything.

I'd be inclined to do all your learning on what you've got. Then you can decide to give it up with not too much cost or decide on a life of poverty and max out your credit cards on extravagance.

I would recommend considering second hand gear though. You can get perfectly good stuff at half the shop prices. Astronomers are pretty careful people and tend to look after their gear.
My basics were not great when I ordered this scope. I will see if I can find something to 'adapt' the existing mount to a DIY ring clamp; if not, second-hand gear is a good idea!

Devin-M said:
Another tip: when you're focusing in Live View mode, you'll want to zoom the live view to 100% so you can judge focus on the individual pixels and not on the image overall.
Great :redface:. I did not think of it because I was so absorbed in seeing the full Sun fit into the frame and seeing all parts of the image at once. Thanks for the tip. Since I do not have a tracking mount, the image keeps moving out of frame, so I will have to zoom in and out frequently for framing and focusing I guess.

sophiecentaur said:
All the big boys use levels and curves (they may call it names like stretching and histogram but it's the same thing, basically.) JPEG is really not good if you want to reveal stuff that it has done its best to suppress, in the interest of data reduction. JPEG was invented for regular pictures which astrophotographs are definitely not.
Lesson learnt. I will try stacking RAW captures next time and post processing to a lossless format.

Devin-M said:

That is a lot of mechanics there! A tuned spring-mass-damper system - never realized that. I wonder if there is a way to disable the mirror (and thus viewfinder) action completely? I never use the viewfinder, but being a beginner to photography and DSLRs, I don't know if that sounds stupid.
 
  • #24
I’d duct tape it once you have it in focus.
 
  • #26
Some other good info:

"Some shutter mechanisms are linked to the mirror mechanism and requires a full cycle of both in order to recock the shutter. Since they are linked then the mirror must cycle down and back up even when shooting in live view so a subsequent shot can be taken.

Other models have a decoupled mirror mechanism that allows the shutter to fire and recock without moving the mirror. For example, Canon Rebels (XXXD and XXXXD) require a full cycle of the mirror during live view shooting while XXD and XD lines don't.
"

https://www.dpreview.com/forums/thread/3952158
 
  • #27
Great! I removed the lens and watched what happened inside with the appropriate settings turned on, and I am able to get only the shutter to fire without the whole mirror assembly banging around. It also feels 'softer' every time it fires in Live View. I think this part of the issue is fixed.

Are you suggesting me to duct tape the focus tube to prevent it from drifting out of focus? That might be a good idea to try.

@sophiecentaur , I tried terrestrial shooting today and took RAW images this time (of a distant signboard with contrasting colors). I think I feel the same way I felt with my solar images: the EP view is more detailed and/or sharper than even the uncompressed prime focus image. I even tried levels+curves on the RAW to see if I could bring out the detail and sharpness of the EP view, but no.

Next I am stacking the uncompressed shots (~25 shots) I took of the signboard to see if that comes close to the EP view. WIP, will post once done.
 
  • #28
PhysicoRaj said:
Are you suggesting me to duct tape the focus tube to prevent it from drifting out of focus? That might be a good idea to try.

Yes.

PhysicoRaj said:
I tried terrestrial shooting today, took RAW images this time (it was a distant signboard with contrasting colors). I think I feel the same way I felt with my solar images. The EP view is more detailed and/or sharper than even the uncompressed prime focus image. I even tried levels+curves on the raw to see if I can bring out the details and sharpness like the EP view, but no.

I posted this before. For $10/mo you can get Adobe Lightroom, which is great for converting RAW files. It's sort of a dark art, but below you can see an animation of editing the Horsehead Nebula to bring out the detail and contrast.

Devin-M said:
I made an animated GIF of histogram stretching a stacked 16bit tif of the Horsehead & Flame Nebulas in Adobe Lightroom (60x 2min 1600iso 600mm f/9 ff-dslr + 40 darks & flats, bortle 6):

1-ezgif-6-d4ce0ecf4182-gif.gif
 
  • #29
Devin-M said:
I posted this before. For $10/mo you can get Adobe Lightroom, which is great for converting RAW files. It's sort of a dark art, but below you can see an animation of editing the Horsehead Nebula to bring out the detail and contrast.
Let me show you what tools I use so that you can advise better:

1. RAW (CR3) to PNG/TIFF conversion: I wrote a Python script based on the rawpy module
2. Stacking and pre-processing: DSS or AS!3
3. Post processing: Krita on Ubuntu (free)
[Note: the reason I do (1) is that AS!3 does not read CR3 data. For DSS, (1) is not necessary. Also, if I want to work with a single unstacked image, Krita in (3) cannot read my CR3 files.]

(1) and (3) are done on a Linux PC while (2) is done on a Windows PC. I know this is a mess, but my main PC is a Linux-only machine for my work purposes and I cannot let compute-hungry software take resources from it. Also, I am in a 'save-every-penny' phase of life right now, so my start in this hobby is more learning-oriented, spending only where it's absolutely needed. I hope this phase is as transient as possible.

Krita has worked very well for levels, curves and sharpening (which seem to be enough for now), but it is not limited to them. Since I was satisfied with it, and I already have it on my Linux machine, I did not bother with the Adobe tools.
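For reference, the levels step I do in Krita is essentially a linear remap with clipping: pick a black point and a white point and stretch everything between them to the full range. A minimal stdlib-only sketch (the black/white points here are made-up numbers for illustration, not values from my images):

```python
def levels_stretch(pixel, black=1200, white=52000, out_max=65535):
    """Linearly remap a 16-bit sample so `black` maps to 0 and
    `white` maps to `out_max`, clipping anything outside the range."""
    if pixel <= black:
        return 0
    if pixel >= white:
        return out_max
    return round((pixel - black) * out_max / (white - black))

# A value halfway between the black and white points lands mid-range,
# while everything outside the chosen window gets clipped:
frame = [800, 1200, 26600, 52000, 60000]
stretched = [levels_stretch(p) for p in frame]
```

Curves do the same thing with a nonlinear mapping instead of a straight line, but the clipping-and-remapping idea is identical.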
 
  • #30
You'll probably want to go from RAW to 16-bit TIF for stacking, 16-bit TIF to 16-bit TIF for histogram stretching, and then 16-bit TIF to 8-bit JPG for final display.
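The reason stretching must happen before the final 8-bit step: converting 16-bit to 8-bit throws away the low byte, so any detail living in small sample differences is gone afterwards. A toy sketch of that conversion (illustrative only):

```python
def to_8bit(value_16):
    """Map a 16-bit sample (0..65535) to 8 bits (0..255) by dropping
    the low byte, as in the final TIF-to-JPG display step."""
    return value_16 >> 8

# Two 16-bit samples that differ only below 8-bit resolution collapse
# to the same display level -- stretch while still in 16 bits.
a, b = 30000, 30200
same_level = to_8bit(a) == to_8bit(b)
```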
 
  • #31
Devin-M said:
You'll probably want to go from RAW to 16-bit TIF for stacking, 16-bit TIF to 16-bit TIF for histogram stretching, and then 16-bit TIF to 8-bit JPG for final display.
I would prefer to process in 16-bit TIFF, but AS!3 would not accept them (or maybe I am doing something wrong); I still have to figure that out. For now I am working with 16-bit PNG. I read that PNG is a lossless compression format, so I hope it is not a bad alternative to TIFF?

And what do you think of the CR3-to-TIFF conversion: are there different TIFF variants that could store more or less information depending on how the conversion is done?
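On the PNG question, as I understand it PNG's DEFLATE compression is lossless in the byte-exact sense, so a 16-bit PNG should carry the same sample values as a 16-bit TIFF (TIFF itself can be uncompressed, LZW or Deflate, all lossless too). A quick stdlib illustration of what "lossless" means, using the same codec family PNG uses:

```python
import zlib

# Pretend this is raw 16-bit pixel data straight from a frame
samples = bytes(range(256)) * 64          # 16 KiB of "pixel" bytes

packed = zlib.compress(samples, level=9)  # DEFLATE, the codec inside PNG
restored = zlib.decompress(packed)

lossless = restored == samples            # byte-for-byte identical
```

So the only thing a lossless format can cost you is file size, never pixel values.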
 
  • #32
Update:

As @sophiecentaur suggested, I did some terrestrial shooting and compared images before and after stacking and post processing with the EP view. Here are the results.

Experiment #1:
1) Shoot multiple exposures of a terrestrial target at prime focus in RAW
2) Process ONE of the RAW images to get a stretched and contrasty uncompressed image
3) Digitally magnify the image to nearly the size it is in EP view
4) Compare both the image and live EP view

Result #1: EP view is better than the processed single exposure.

Experiment #2:
1) Stack and align ALL of the raw exposures in AS!3 and obtain an uncompressed image
2) Process the image to stretch histogram and adjust curves
3) Digitally magnify the image to nearly the size it is in EP view
4) Compare both the image and live EP view

Result #2: The stacked and processed image is as detailed as (or maybe even a bit more detailed than) the live EP view!:partytime::biggrin:
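As a sanity check on the "digitally magnify to EP size" step, here is the sampling arithmetic for my setup (scope and camera numbers from post #1; the pixel pitch is my own estimate from 6000 px across a 22.3 mm APS-C chip, so treat it as an assumption):

```python
focal_len_mm = 60 * 12            # 60 mm f/12 refractor -> 720 mm focal length
pixel_um = 22.3 * 1000 / 6000     # assumed pitch: 22.3 mm sensor width / 6000 px

# Sky coverage of one pixel at prime focus (small-angle formula):
# 1 radian = 206265 arcsec, so arcsec/px = 206.265 * pitch_um / f_mm
arcsec_per_px = 206.265 * pixel_um / focal_len_mm

# The Sun is roughly 0.53 degrees (~1900 arcsec) across
sun_px = 0.53 * 3600 / arcsec_per_px
```

That puts the solar disc at roughly 1800 px across at prime focus, sampled at about 1.06 arcsec/px, which is finer than the ~1.9 arcsec Dawes limit of a 60 mm objective. If my assumed pitch is right, this is consistent with the sensor sampling not being the bottleneck.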

Here are the images [note: these are not full size; I have zoomed in on the region of interest to show the difference]:

The single exposure processed prime focus image with region of interest for details marked in red:
IMG_0194.jpg


Single exposure: the red part zoomed in digitally to view the details:
interest.png


Stacked image: red part zoomed in digitally to view the details:
final_test.png


So the resolution was there in the prime focus image; it was just that a single RAW exposure could not capture the resolvable details. Nor could the movie I previously used for stacking, since the movie was in a compressed format.
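That squares with the usual statistics: averaging N frames cuts random noise by about the square root of N, so my ~25 frames should give roughly a 5x noise reduction over a single exposure. A toy stdlib-only simulation of that (synthetic numbers, not my actual frames):

```python
import random
import statistics

random.seed(42)
TRUE_LEVEL = 1000.0   # "real" brightness of one pixel
NOISE = 50.0          # per-frame random noise (standard deviation)
N_FRAMES = 25         # same count as my signboard shots

# Simulate many independent pixels, each averaged ("stacked") over 25 frames
stacked_values = []
for _ in range(200):
    frames = [random.gauss(TRUE_LEVEL, NOISE) for _ in range(N_FRAMES)]
    stacked_values.append(statistics.fmean(frames))

# Noise after stacking should drop to about NOISE / sqrt(N_FRAMES) = 10
stacked_noise = statistics.stdev(stacked_values)
```

Detail sitting below the single-frame noise floor becomes visible once the noise is averaged down, which is presumably what happened on the signboard.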

I would like to hear your thoughts on the results of this experiment.
 
  • #33
PhysicoRaj said:
I did some terrestrial shooting and compared images before and after stacking and post processing
I've read about some terrestrial photographers who stack their pictures, but not using a movie sequence. (Image enhancement for reading crooks' car number plates seems to work brilliantly at the cinema, so it may be used more than you'd think.) The seeing, if the weather is half decent, wouldn't benefit much from it. I was just suggesting you could find any inadequacies of your basic system (60mm objective etc.) from pictures of small, contrasty objects. I was clutching at straws, mainly.
You'll just have to wait for a good day, take a dozen or so RAW shots of the Sun and see how much improvement you can get. Your experience just goes to show how good our brains can be at dragging important details out of 'average' quality images.

But the improvement in apparent resolution you got from stacking was notable on the printing on that sign.
 
  • #34
sophiecentaur said:
The seeing, if the weather is half decent, wouldn't benefit much from it. I was just suggesting you could find any inadequacies of your basic system (60mm objective etc.) from pictures of small, contrasty objects. I was clutching at straws, mainly.
I get it. But it still showed that the EP view can be replicated. It's not that I haven't seen stacked images with improved quality; it's that nowhere could I find clear evidence that what I can see through the EP can also be captured (without EP projection) at the same quality, particularly after the blatant difference I saw in my Sun image.

I could've easily tried this on the Sun, but it's 100% overcast here, forecast to remain the same for the next 3 days :frown:.

My main concern was that the details I saw in the EP view were being lost either between the sensor pixels or to the limits of the objective optics, but this experiment proved otherwise (correct me if wrong).

Your experience just goes to show how good our brains can be at dragging out important details from 'average' quality images.
It has certainly changed my perspective on astrophotography. When I started the thread I took the EP view for granted, without giving credit to the brain; rather, I thought the sensor at prime focus was picking up a really bad image.

I suspect my optics much less now, thanks to this.

I still need to do the same process on the Sun and compare with the picture in #1 to be sure. And then work on my mechanical issues with focus and balance; that should help me extract the last drop!
 
  • #35
PhysicoRaj said:
but nowhere could I find enough evidence that I can see something through the EP and also capture it (without EP projection) at the same quality,
This would go against "everyone else's" experience, both historical and present-day. Since the earliest photos were taken we have seen so much more than we ever did through EPs (me in my back garden too), so it's not just better objective lenses.

I guess it must all be down to your elven ancestry; you're just not like us ordinary mortals. :wink:
 
