Level of details in prime focus vs eyepiece images

In summary: you will get poorer results if you are not stacking images. I suggest looking at a YouTube video or two on how to take a short movie of the Sun and stack the frames within it; there are multiple free tools for the job. The final thing in your list is really the matching of the chip pixel size to the image size, but you need to look at the other bits first, as they are the most significant, and you should be able to get some decent results.
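The stacking step itself can be sketched in a few lines of Python. This is a minimal illustration with synthetic data; real planetary stacking tools also align and quality-rank each frame first, which this omits:

```python
import numpy as np

def stack_frames(frames):
    """Average a list of (already aligned) frames.

    Uncorrelated noise drops by roughly sqrt(N) for N frames,
    which is why a short video beats any single still."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Demo with synthetic data: a faint square "planet" buried in noise.
rng = np.random.default_rng(0)
signal = np.zeros((64, 64))
signal[28:36, 28:36] = 100.0
frames = [signal + rng.normal(0, 30, signal.shape) for _ in range(200)]

single_noise = np.std(frames[0] - signal)            # noise in one frame (~30)
stacked_noise = np.std(stack_frames(frames) - signal)
print(round(single_noise / stacked_noise, 1))        # roughly sqrt(200) ~ 14x better
```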
  • #141
Planetary images are best with very long lenses (2,000mm) to fill more of the sensor.
 
  • #142
sophiecentaur said:
Planetary images are best with very long lenses (2,000mm) to fill more of the sensor.
This one was at 2100mm, but the sensor is a DSLR which is bigger than the planetary webcams so the size of the planet relative to the frame is small.

Also, the lens and Barlow quality is not that great, so the rings are not clear. I will probably move to a used bigger scope and better optics, I'm just fiddling around with the present one till I save up some bucks. At least I was able to partially resolve the rings, let's see how much juice I can extract from this scope while I'm at it.

Edit: BTW, do you think looking at the image that my barlow is actually 3x?
Image size: 1920x1080 (3.7 micron pixel)
Cropped to: 1000x1000
Objective FL: 700mm
Barlow: 3x (?)

I have seen some images of Saturn at 2000mm like this and it is much larger. Even though that image is 1280x720, the proportion by which the planet is bigger than mine seems to indicate that my Barlow could be a 2x.
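One way to settle the 2x-vs-3x question is to compute the expected size of the planet on the sensor from the image scale and compare it with what you measure in the frame. A quick sketch, assuming an approximate 42 arcsec span for Saturn's rings:

```python
def planet_pixels(focal_mm, pixel_um, angular_arcsec):
    """How many pixels an object of the given angular size should span.

    Image scale (arcsec/pixel) = 206265 * pixel_size / focal_length."""
    scale = 206265.0 * pixel_um * 1e-3 / focal_mm
    return angular_arcsec / scale

RING_SPAN = 42.0  # approx. angular width of Saturn's rings, arcsec (assumption)
for barlow in (2, 3):
    px = round(planet_pixels(700 * barlow, 3.7, RING_SPAN))
    print(f"{barlow}x Barlow: ~{px} px across the rings")
# With a 700mm objective and 3.7 micron pixels, 2x gives ~77 px and
# 3x gives ~116 px -- measuring the rings in your frame tells you
# which Barlow you really have.
```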
 
Last edited:
  • #143
The rule is “Spend spend spend” I’m afraid.
 
  • #144
sophiecentaur said:
The rule is “Spend spend spend” I’m afraid.
It is rather scary. But now I am starting to believe that Celestron might have kind of duped me with this scope. Check the edit on my last post.

Even without a Barlow, my aperture seems to have a Dawes-limit resolution that is larger than what the focal length can extract. On top of that, they supplied a Barlow, which seems ridiculous unless my math is wrong.
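The Dawes-limit-vs-pixel-scale comparison is easy to check numerically. A sketch, where the 70mm aperture is an assumed figure; substitute the real aperture of the scope:

```python
def dawes_limit_arcsec(aperture_mm):
    """Classical Dawes limit for an aperture, in arcseconds."""
    return 116.0 / aperture_mm

def image_scale_arcsec_per_px(focal_mm, pixel_um):
    """Arcseconds of sky covered by one sensor pixel."""
    return 206265.0 * pixel_um * 1e-3 / focal_mm

aperture = 70.0  # mm -- an assumed value; use your scope's real aperture
limit = dawes_limit_arcsec(aperture)
scale = image_scale_arcsec_per_px(700.0, 3.7)
print(round(limit, 2), "arcsec resolvable;", round(scale, 2), "arcsec per pixel")
# As a rough sampling heuristic: if limit/scale is well below ~2 the sensor
# undersamples the optics and a Barlow genuinely helps; well above ~3 and
# the extra focal length is mostly wasted.
```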
 
  • #145
PhysicoRaj said:
Image size: 1920x1080 (3.7 micron pixel)
Objective FL: 700mm
Barlow: 3x (?)
In my earlier test shot of Saturn which appeared to have similar detail, I was shooting at 1/3.5 the focal length (600mm f/9), but the imaging sensor on the D800 was 7360x4912… 3.8x the sensor resolution… The following shot (which I posted before) has been cropped & also upscaled with interpolation.

https://www.speakev.com/attachments/saturn_stacked_mono_green2-gif.150147/
If you crop the image before uploading it, the planet will have a larger final display size on the web page without sacrificing resolution. This website is downsizing your final image to below 1920x1080 when you upload it to conserve bandwidth.

For best final display you’ll want to determine what size this website downsizes your final image to, and then do a 100% crop at those same dimensions before you upload it.

You could also host the image elsewhere and include a link to it to avoid the final downsizing that occurs after you upload it.
 
Last edited:
  • #146
Devin-M said:
In my earlier test shot of Saturn which appeared to have similar detail, I was shooting at 1/3.5 the focal length (600mm f/9), but the imaging sensor on the D800 was 7360x4912… 3.8x the sensor resolution… The following shot (which I posted before) has not only been cropped but also upscaled with interpolation...
I have cropped a bit - just checked that the image I uploaded is 1000x1000. Note that I am not worried about planet resolution, but about the size of the planet in relation to the frame size. My cropping it should only give an even bigger planet. And with your frame being 3.8x the size of mine before cropping, and your focal length 3.5 times lower, I should definitely have gotten a bigger relative planet size. Something definitely seems off.
 
  • #147
I think it’s as simple as cropping out the empty space around the planet before uploading and what remains will be displayed at a larger apparent size.
 
  • #148
PhysicoRaj said:
Image size: 1920x1080 (3.7 micron pixel)
Doesn’t your camera shoot in RAW mode higher than 1920x1080?
 
  • #149
PhysicoRaj said:
And your frame size being 3.8x the size of mine before cropping, and 3.5 times lower focal length, I should have definitely gotten a bigger relative size of the planet.
I’m not sure this is all accurate… we haven’t factored the different sensor size… I was using a 35mm sensor.
 
  • #150
Devin-M said:
I’m not sure this is all accurate… we haven’t factored the different sensor size… I was using a 35mm sensor.
I use a crop sensor, 1.6x. Now the more cropped it is the bigger my planet has to be, so I'm even more suspicious now.

Devin-M said:
Doesn’t your camera shoot in RAW mode higher than 1920x1080?
It does, but the FPS is low. I use a lower size to get more FPS.
 
  • #151
Devin-M said:
I think it’s as simple as cropping out the empty space around the planet before uploading and what remains will be displayed at a larger apparent size.
Exactly: for the focal length I used (2000mm), my smaller sensor (1.6x crop) and the crop I did (1920p to 1000p), I expected a larger apparent size of the planet but I am seeing less. Your image is at a lower focal length (600mm) on a bigger sensor (1.6x mine), but still shows a larger apparent size than mine. How much did you crop the image before uploading here?
 
  • #152
I cropped mine a lot.

I just cropped and enlarged your image and got this which looks very similar to mine.

807AE10C-2479-4275-ADBB-312AE230C6CF.jpeg


mine for comparison:

https://www.speakev.com/attachments/saturn_stacked_mono_green2-gif.150147/
 
  • #153
With mine I shot in RAW mode at 7360x4912, not video mode at 1920x1080… you probably lost quite a bit of detail by doing that.
 
  • #154
Well yes, makes sense then.

I think the loss of detail is because my camera records only MP4, even in max video resolution of 4K. I don't have an option of uncompressed AVI or SER or even lightly compressed MKV/MOV.

The 4K recording is 25 fps which seems very low, so I went to 50 fps that gives 1080p HD.

Provided the planet occupies a fixed number of sensor pixels at a given focal length, the video size is effectively just cropping it down, but the compression of MP4 is insanely lossy.
 
  • #155
PhysicoRaj said:
I think the loss of detail is because my camera records only MP4, even in max video resolution of 4K. I don't have an option of uncompressed AVI or SER or even lightly compressed MKV/MOV.
There you go! Camera characteristics are things you just have to buy your way out of. You could take a change of direction for a while and image different objects - objects more suited to your camera lens and sensor. There is no shortage of them and you can get some very satisfying stuff - particularly because you can be looking up, rather than near the boiling horizon for planets.
One day you can spend a load of money on an appropriate OTA, Mount and Camera but you will never get the sort of planetary images that you crave with what you have. That's just being pragmatic.

PS I was wondering whether a large set of still images might give you enough for dealing with 'planetary problems' and give you inherent high res.
 
  • #156
I wouldn’t be so sure that when you switched to 1080p that the image was “cropped.” “Resized” is more likely which, if true, threw away a very significant amount of the resolution of the planet.

My D800, which has 7360x4912 resolution, also shoots video in 1080p, but it doesn’t “crop” the full frame, it “resizes” the full frame. So in my camera’s case, if I were shooting in 1080p, I’d be starting with 4912 pixels in the vertical axis but end up with only 1080; in other words, the vertical resolution would be only 1/4.5 of the maximum possible resolution if I had made that choice.
 
  • #157
Devin-M said:
So in my camera’s case If I was shooting in 1080p, I’d be starting with 4912 pixels in the vertical axis and after resizing I’d be down to 1080 pixels in the vertical axis
That is what I (and most other digital photographers?) would call cropping, which loses information. Re-sizing is just altering the size of a displayed image. Re-sizing can involve cropping when you are displaying an image with a modified aspect ratio without distorting.
 
  • #158
Cropping would be when you remove pixels only from the edges of the image (like if the 1080p came only from the central pixels of the sensor, all of which are preserved except the chopped off edges) which won’t change the resolution of the planet. Resizing is when you throw away pixels in between other pixels which does change the resolution of the planet. At least on my camera, If I shot in 1080p, my Saturn resolution would only be 1/4.5 as high as shooting in RAW mode.
 
  • #159
Devin-M said:
Cropping is when you remove pixels only from the edges of the image (like if the 1080p came only from the central pixels, all of which being preserved) which won’t change the resolution of the planet, resizing is when you throw away pixels in between other pixels which does change the resolution of the planet. At least on my camera, If I shot in 1080p, my Saturn resolution would only be 1/4.5 as high as shooting in RAW mode.
I would say you are using the terms in an uncommon way. Cropping gets rid of information (no question of that, because there will be pieces of the photograph that end up on the floor). In the case of a picture of a planet, you are sort of lucky that the background stars may not be what you wanted (but what about the Jovian moons?). Any loss of information may have consequences.

Describing re-sizing as 'throwing away pixels' is not accurate, or at least a really bad way of describing it. If you want to alter the (spatial) sampling rate (increasing or decreasing), an algorithm will make best use of the resulting samples by interpolation and will lose no information. If you have a small image of a planet, with poor resolution, and you want a bigger one, you will not be throwing away anything but going for the best interpolation formula. Repeating samples would be a really naff thing to do. The two operations you describe would only even be possible for changes with integer ratios of pixel spacing.
 
  • #160
I should be more clear… resizing to a smaller size throws away information which is what I suspect was done here. Resizing to a larger size doesn’t necessarily lose any information especially when interpolation is disabled. Resizing to a larger size with interpolation reduces sharpness.
 
  • #162
One way you can test if you’re losing resolution in 1080p mode…

Take a short clip in 4K mode, and then another clip in 1080p mode. If they both have the same framing, you know you lost resolution, because the image wasn’t cropped; it was resized to smaller dimensions, which negatively affects the resolution of the planet.

In other words, if the framing stays the same when going from 4K to 1080p (i.e. all the same objects are still in frame in the same positions), it means you threw away pixels in between other pixels (resizing smaller) rather than only throwing away pixels from the edges of the sensor (cropping).
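The crop-vs-resize distinction can be simulated with a toy numpy sketch: cropping to a central window preserves every planet pixel, while decimating the whole frame does not. (Real cameras filter rather than skip pixels, but the loss of planet resolution is comparable.)

```python
import numpy as np

full = np.zeros((4912, 7360))            # simulated full-resolution sensor
full[2400:2512, 3600:3712] = 1.0         # a 112x112-pixel "planet"

# Crop: keep only a central 1080x1920 window -- every planet pixel survives.
r0, c0 = (4912 - 1080) // 2, (7360 - 1920) // 2
cropped = full[r0:r0 + 1080, c0:c0 + 1920]

# Resize (crude decimation): keep every 5th pixel across the whole frame --
# the framing is unchanged but the planet now spans far fewer pixels.
resized = full[::5, ::5]

print(int(cropped.sum()), int(resized.sum()))   # 12544 vs 529 planet pixels
```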
 
Last edited:
  • #163
Here I've substituted a hummingbird for a planet to demonstrate the different display options.

The first thing to consider is that this site will resize any image you upload to no more than 620px height or 800px width. Knowing this, how do you shoot and process the image to get the highest angular resolution on the target in the final display environment?

Here are several examples:

1) Full frame image (7360x4912 jpg), uploaded and reduced by the server to 800 width:
620p.jpg


2) Shot in simulated 1080p HD 16x9 ratio, uploaded and reduced by the server to 800 width:
1080p_to_620p_16x9.jpg


3) Full frame image (7360x4912 jpg), cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width:
4912p_cropped_to_620p_3x2.jpg


4) Shot in simulated 1080p HD 16x9 ratio, cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width:
1080p_cropped_to_620p_3x2.jpg


We can see from the above demo that for highest angular resolution in the final display, option 3 is best-- "Full frame image (7360x4912 jpg), cropped to 620 height, 3x2 ratio prior to uploading."

That would be the equivalent of shooting in RAW mode, then cropping (not resizing) the image to 620 height (or 800 width), and then uploading to the server.
 
  • #164
Devin-M said:
Here’s a good article on resizing vs cropping…

https://www.photoreview.com.au/tips/editing/resizing-and-cropping/
I wouldn't describe that article as good. It says that resizing means 'throwing away pixels'. As I mentioned before, any photographic processing software worth its salt never just throws away pixels. The individual pixel values are samples of the original scene. Resizing an image requires the appropriate filtering in order to minimise any loss of information, or any distortion of the spatial phase or frequency of the components of the original image. The 'appropriate filtering' basically starts by reconstructing the original image (akin to the low-pass audio filter which removes the sampling products from an audio ADC). This image can be reconstructed perfectly if the original sampling has followed the rules (Nyquist), and it can be resampled downwards by applying a further Nyquist filter. Nothing in the spectrum below the new Nyquist frequency need be lost, and you will get a set of new pixels (samples) that should not show any pixellation once displayed with the appropriate post-filtering.

Note: the process of re-sampling by just leaving out or repeating samples was last used in the old days of movie film, when the length of a film sequence, or its shooting rate, needed to be matched to the standard projection rate; frames were crudely repeated or deleted. All the information in an image that's been resampled can be reproduced perfectly, except when reducing the sample rate (number of pixels), because the Nyquist criterion has to be satisfied by suitable pre-filtering of the first stored image. Actually, strict Nyquist filtering is not always necessary for some images, because aliases are not necessarily an impairment. Aliases in normal photographs can be much more of a problem because of regular patterns, which we don't see in astrophotography. Intelligent image processors can deal with a lot of that.

Calling all this 're-sizing' is misguided and oversimplistic. This process of 'zooming' is re-sampling, and using the right term makes it clear what's going on.
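The aliasing point can be demonstrated in one dimension with a minimal numpy sketch: a tone just below the original Nyquist limit survives naive decimation at full amplitude but at the wrong frequency, while even a crude two-tap pre-filter suppresses it instead.

```python
import numpy as np

# A tone at 0.45 cycles/sample, just below the ORIGINAL Nyquist limit (0.5).
# After 2:1 decimation the new limit is 0.25, so it must be filtered out first.
x = np.sin(2 * np.pi * 0.45 * np.arange(1024))

naive = x[::2]                            # drop samples: full amplitude survives,
                                          # aliased to 0.10 cycles/sample
filtered = x.reshape(-1, 2).mean(axis=1)  # crude 2-tap pre-filter, then decimate

print(round(float(np.std(naive)), 2), round(float(np.std(filtered)), 2))
# naive keeps nearly all the tone's energy (as a false frequency);
# the pre-filtered version attenuates it heavily.
```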
 
  • #165
Now if I enlarge and crop option 4...
Devin-M said:
4) Shot in simulated 1080p HD 16x9 ratio, cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width
1080p_cropped_to_620p_3x2_enlarged.jpg


To the same apparent size as option 3...
Devin-M said:
3) Full frame image (7360x4912 jpg), cropped to 620 height, 3x2 ratio prior to uploading, reduced by server to 800 width
4912p_cropped_to_620p_3x2.jpg


...the loss of image quality in option 4 (1st picture) can be easily observed, which I believe is the same loss the OP experienced by shooting in 1080p HD mode rather than RAW...
 
  • #166
Devin-M said:
Now if I enlarge and crop option 4...
What I see is the same viewed image size at different resolutions (pixel size), subjected to some form of processing which has a name but no definition.

If you fire up Photoshop or an equivalent and load an image, the 'crop' tool will allow you to select a portion of the full image. Unless it thinks it knows best what you want, you will be left with the portion you chose, and it will have the same pixel size. That is why I call cropping cropping. If you choose to expand to fill the screen, the image will (should) have the same pixel dimensions. You can change the pixels per inch in the Image Size option. In PS you can 're-size' the image to fit whatever printed image you might want, and you also have a choice of pixels per inch. The two quantities, size and resolution, are independent.
As far as I'm concerned, Adobe is God in these matters and their notation is pretty universal. Their image sizing can be done with various algorithms iirc.
 
  • #167
To "crop" but not also "resize" in Photoshop, you have to choose the ratio of the crop but not the final pixel dimensions. So if you know the final display will be 800px wide, and your crop in ratio mode ends up at 800px wide, then you'll get a 1:1 ratio of sensor pixels to display pixels in the final image, which should result in the lowest possible degradation in quality if you're imaging an object of small angular size like Saturn.

ratio-crop.jpg
 
  • #168
Devin-M said:
Resizing to a larger size doesn’t necessarily lose any information especially when interpolation is disabled.
I read this again and, in the context of PS etc. it doesn't really mean anything unless you specify whether or not the pixel count of the image is increased so as to keep displayed pixel size the same. I can't think how you would be able to achieve any arbitrary value of resizing without some form of interpolation filtering. The positions of the original samples were defined by the source image array. How could you 'resize' the image just by adding or subtracting a pixel, every so often?
 
  • #169
So if you know the final display width is 800px width, then while in ratio crop mode (in this case 3:2) you select an area which is 800px in width and you'll be cropping without resizing.

799px.jpg
 
  • #170
Devin-M said:
So then if you know the final display will be 800px width, and your crop in ratio mode ends up at 800px width, then you'll get a 1 to 1 ratio of sensor pixels to display pixels in the final image which should result in the lowest possible degradation in quality if you're imaging a low angular dimension object like Saturn.
I think you are underestimating the capabilities of processing apps these days. I now see what you were getting at: you are implying that you have to choose your scaling so the pixels have an integer ratio. If it were as simple a system as you imply, then how would a photographer be able to mix images of arbitrary original sizes and pixel resolutions and scale / distort them so that the result doesn't show the jiggery pokery? The processing has to go far deeper than that, dealing with reconstructed internal images, before there would be any chance of PS doing the excellent editing job it does.
There is no harm in doing PS's thinking for it but why? You would have a serious problem stitching images of a large object like the moon together, for instance.

BTW is that your shot of the humming bird? Nice and you are a lucky devil to have them around.
 
  • #171
sophiecentaur said:
I can't think how you would be able to achieve any arbitrary value of resizing without some form of interpolation filtering. The positions of the original samples were defined by the source image array. How could you 'resize' the image just by adding or subtracting a pixel, every so often?
Non-Interpolated Resize (Enlarge) "Nearest Neighbor":
non-interpolated.jpg


Interpolated Resize (Enlarge):

interpolated.jpg
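The difference between the two enlargements can also be shown numerically. A toy sketch: nearest-neighbour enlargement only repeats existing values, while interpolation invents intermediate ones.

```python
import numpy as np

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])

# Nearest-neighbour 2x enlargement: each pixel is simply repeated (blocky).
nearest = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# Linear interpolation of the top row to the same 4 samples: the new
# sample positions fall between originals, so intermediate values appear.
interp_row = np.interp(np.linspace(0, 1, 4), [0, 1], img[0])

print(nearest[0].tolist())   # [0.0, 0.0, 10.0, 10.0] -- only original values
print(interp_row.tolist())   # 0 and 10 plus two in-between values
```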
 
  • #172
sophiecentaur said:
BTW is that your shot of the humming bird? Nice and you are a lucky devil to have them around.
Thank you, yes I took these in my backyard in RAW with a Nikon D800, Nikon 300mm f/4.5 w/ a Nikon TC-301 2x teleconverter for effective 600mm f/9 on a cloudy day at about 1/500th sec 6400iso.
 
  • #173
It looks like tonight is my only chance this month before the moon comes back…

5FD89ECC-ED57-430A-BEE8-490B722C6999.jpeg
 
  • #174
Devin-M said:
So if you know the final display width is 800px width, then while in ratio crop mode (in this case 3:2)
That's interesting. So it's the PS adventure game! I can't find that particular door. From the image you posted of a PS screen, that box should drop down from the Edit button? My Resize button gives me the usual size and resolution options. Where does the other list come from?
Is it a plug in?

I still don't think that sort of special undersampling will deal with many of the functions that we use PS for; even a simple trapezium stretch will change the frequencies and the effective pixel spacing, so there's no longer a simple ratio.

People seem to be trying to use inadequate equipment, imo. Raw images, TIFF and AVI are worth paying for when you want 'show pictures'. There are many low-cost CMOS and CCD cameras which can be driven by a bog-standard laptop (not always conveniently by macOS, though! grr). The sensor on an amateur DSLR is too big, or not HD enough, unless you use a very expensive scope.
 
  • #175
The "ratio" refers to the ratio of the number of pixels in the width vs the number in the height. So whatever you crop in ratio mode, the remaining pixels stay unchanged (unresized) by the cropping operation (with a 3:2 ratio the height will be 2/3 of the width, which is standard DSLR / mirrorless framing). The reason you would choose 800 pixels for the width while cropping in ratio mode is that if you choose a larger number, it will be downsized to the max 800px width on this website. If you choose an area smaller than 800px in ratio mode, that also won't change the original pixels, but the image won't fill the max available space on this website. So if you crop to 800px width or less in ratio mode (and the height is less than 620px), you will end up with each pixel shown on this site corresponding to a single pixel from the image sensor.
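A centered ratio crop sized to the display width can be computed like this. This is a hypothetical helper, assuming the planet sits at frame center and using the 800px/620px limits discussed above:

```python
def ratio_crop_box(img_w, img_h, display_w=800, ratio=(3, 2)):
    """Centered crop box at the display width, so sensor pixels map 1:1
    to display pixels and the site's downsizing never kicks in."""
    crop_w = min(display_w, img_w)
    crop_h = crop_w * ratio[1] // ratio[0]   # e.g. 800 x 533 for 3:2
    left = (img_w - crop_w) // 2
    top = (img_h - crop_h) // 2
    return left, top, left + crop_w, top + crop_h

print(ratio_crop_box(7360, 4912))   # -> (3280, 2189, 4080, 2722)
```

An 800x533 crop also stays under the 620px height limit, so neither dimension triggers resizing.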
 
