Is there a way to improve CRT raster scan efficiency by scanning both ways?

  • Thread starter artis
In summary, the conversation discusses a proposed modification to the original electron-gun raster scan pattern used in television displays. The modification would have the scan move from left to right and then back from right to left on alternating rows, eliminating the need for the electron beam to retrace to the start of each line and thus saving time. However, the idea faced objections due to potential issues with non-parallel lines and the impact on existing technology. The conversation also touches upon the historical development of the scan pattern and its influence on reducing flicker.
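The proposed pattern is sometimes called a boustrophedon ("as the ox ploughs") scan. As a minimal illustrative sketch (Python, not part of the original discussion), the two orders differ only in the direction of odd-numbered lines; the zig-zag order needs no horizontal retrace between lines:

```python
# Conventional raster versus the proposed zig-zag (boustrophedon) raster,
# expressed as pixel visiting orders for a tiny hypothetical frame.

def raster_order(rows, cols):
    """Conventional scan: every line left to right, with an implied retrace."""
    return [(r, c) for r in range(rows) for c in range(cols)]

def zigzag_order(rows, cols):
    """Proposed scan: even lines left to right, odd lines right to left."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order

print(zigzag_order(2, 3))  # [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```

Note that in the zig-zag order each pixel is adjacent to the previous one, which is exactly why any nonlinearity shows up as the ragged vertical edges mentioned below.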
  • #36
tech99 said:
The pre War developments also included mechanical scan for larger screen projection, and this would not lend itself to zig-zag scan. For instance, at RadiOlympia in 1938 a large screen mechanical scan 405 line receiver made by Scophony was displayed. It utilised small, high speed mirror drums and used an opto-acoustic light modulator, called the Jeffree Cell. https://blog.scienceandmediamuseum....cophony-tv-receiver-high-speed-scanner-motor/
Another serious objection to zig-zag scan is that the scan must be perfectly linear to a precision of one pixel, or vertical edges will be ragged, and such linearity was impracticable.
The point about scanning, however you do it, is that it is a sampling process. Any sampling introduces artefacts which can make re-construction problematic. Conventional scanning was selected because it was convenient (a spinning disc, initially) and that led to a relatively simple sawtooth horizontal and vertical scan. The artefacts are 'predictable'. If you try to scan in other ways, the vertical and horizontal sample frequencies are no longer uniform so the nice, friendly 'comb' spectrum of a PAL signal is destroyed. The timing suddenly may need to be much better (the 'pixel accuracy' of @tech99 may come in).
The downside of the conventional scan for TV tubes is, as people have mentioned, the enormous power needed for sawtooth deflection, and the scan linearity with wide angle tubes. But that was more or less sorted out with large sweaty power circuits.
Repeat scanning is terrible value for bandwidth use, if the picture is actually transmitted in the same form that it's detected and displayed, as in conventional transmission.
Once you have modern digital signal processing, the same basic scanned picture can be compressed into a tiny channel compared with the 7MHz or whatever for old fashioned TV. In that situation, a picture is a picture and can be imaged or displayed in any way you choose. Transmission becomes a different issue.
 
  • #37
sophiecentaur said:
The point about scanning, however you do it, is that it is a sampling process. Any sampling introduces artefacts which can make re-construction problematic. Conventional scanning was selected because it was convenient (a spinning disc, initially) and that led to a relatively simple sawtooth horizontal and vertical scan. The artefacts are 'predictable'. If you try to scan in other ways, the vertical and horizontal sample frequencies are no longer uniform so the nice, friendly 'comb' spectrum of a PAL signal is destroyed. The timing suddenly may need to be much better (the 'pixel accuracy' of @tech99 may come in).
The downside of the conventional scan for TV tubes is, as people have mentioned, the enormous power needed for sawtooth deflection, and the scan linearity with wide angle tubes. But that was more or less sorted out with large sweaty power circuits.
Repeat scanning is terrible value for bandwidth use, if the picture is actually transmitted in the same form that it's detected and displayed, as in conventional transmission.
Once you have modern digital signal processing, the same basic scanned picture can be compressed into a tiny channel compared with the 7MHz or whatever for old fashioned TV. In that situation, a picture is a picture and can be imaged or displayed in any way you choose. Transmission becomes a different issue.
I think that along a scan line we do not use sampling; there is no fundamental limit to resolution other than spot size. Although that creates a low-pass filter action, it is not a sampling process; the system is analogue in that respect. In the vertical direction we do have sampling, and the maximum vertical spatial frequency is then restricted, by Nyquist, to half the number of lines. As I have mentioned before, sampling doubles the required bandwidth.
Compression of TV signals relies on exploiting limitations of human vision, so we are actually robbing the recipient of information or placing constraints on what may be displayed. It cannot defeat the laws of Nature, such as Shannon. I am a bit out of date here, but I think a full quality digital TV picture when transmitted on the air will require about the same bandwidth as an analogue transmission.
 
  • #38
In the case of conventional TV you are ‘nearly’ right about there being no sampling on a line but the vertical dimension certainly is sampled. Any other form of scanning would involve both H and V explicit sampling.
Forget Shannon as analogue TV is way short of that limit. For a start, it wastes most of its bandwidth sending most of most pictures again and again. The law at work there is the ‘getting it done somehow’ law. There are many lossless methods of compression which, given an appropriate processing delay, can give moving pictures with the only shortcomings being in the original analogue signal (or sensor limitations). Shannon does not specify processing time either.

The vertical / horizontal bandwidth issue is not straightforward. The line rate is inversely related to the horizontal resolution for a given channel bandwidth. The choices in existing systems are only approximately optimal.
 
  • #39
tech99 said:
I think a full quality digital TV picture when transmitted on the air will require about the same bandwidth as an analogue transmission.
The bandwidth required for uncompressed 'raw' data at the nominal resolution of the usual analogue signal would be far bigger. For practical reasons the actual bandwidth requirement is kept roughly the same, but the data are digitally compressed. In the end it still comes with higher quality.
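A back-of-envelope check of that point (illustrative numbers, not from the thread: SD digital video at roughly the resolution of a 625-line analogue signal, sampled 8-bit 4:2:2):

```python
# Uncompressed SD digital video bit rate, Rec.601-style illustrative figures.
pixels_per_frame = 720 * 576      # active picture raster
bits_per_pixel = 16               # 8-bit luma plus shared 8-bit chroma (4:2:2)
frame_rate = 25                   # frames per second
raw_bitrate = pixels_per_frame * bits_per_pixel * frame_rate
print(raw_bitrate / 1e6)          # ~166 Mb/s raw
```

An 8 MHz terrestrial channel carries on the order of 24 Mb/s with digital modulation, so even a "full quality" digital picture needs substantial compression to fit where one analogue channel used to be.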
 
  • #40
Rive said:
The bandwidth required for uncompressed 'raw' data at the nominal resolution of the usual analogue signal would be far bigger. For practical reasons the actual bandwidth requirement is kept roughly the same, but the data are digitally compressed. In the end it still comes with higher quality.
It's not really valid to compare analogue and digital TV because, firstly, no one needs to transmit the amount of repeated information in most TV programme material. Our brains could actually not cope with separate full res 625 line pictures, appearing 25 times a second. Secondly the data rate needed is not really comparable with an analogue bandwidth. You need to do the whole calculation which has to compare minimal channel loss / signal strength and then relate analogue picture to noise ratio to error rates and how the coding and transmission methods deal with them.
The proof of the pudding is that the four/five-channel TV service on UHF in the UK has been replaced by the 70 standard-definition and 15 HD channels on Freeview in the same UHF spectrum space.
 
  • #41
sophiecentaur said:
The proof of the pudding is that the four/five-channel TV service on UHF in the UK has been replaced by the 70 standard-definition and 15 HD channels on Freeview in the same UHF spectrum space.
At the same time the multi-path ghosts have been exorcised.
I wonder what a zig-zag scan would look like with multi-path ghosting.
 
  • #42
Two artefacts for the price of one, probably. But echoes would tend to be broken up. One plus point for zig-zag, but several minuses, I expect.
 
  • #43
@tech99 I think that if digital radio-wave broadcasting were done the way analogue was, with each frame sent as a separate "new" frame of information (where the information largely overlaps, as @sophiecentaur already pointed out), then the digital bandwidth would probably be comparable to the analogue. But if I'm not mistaken, since the large-scale introduction of digital broadcasting we have relied on the fact that modern TVs have built-in circuitry with memory and signal processing, so we can now send just the pixels that have changed in a new frame while the previous ones remain displayed from memory.
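The changed-pixels idea can be sketched in a few lines (a toy illustration of the principle, not any real broadcast codec): transmit only the pixels that differ from the previous frame, with their positions, and let the receiver's frame memory supply the rest.

```python
# Toy inter-frame delta coding on 1-D "frames" (lists of pixel values).

def delta_encode(prev, curr):
    """Return [(index, new_value)] for every pixel that differs."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def delta_decode(prev, delta):
    """Apply a delta to the frame held in the receiver's memory."""
    out = list(prev)
    for i, v in delta:
        out[i] = v
    return out

prev = [0, 0, 0, 0, 0, 0]
curr = [0, 0, 9, 0, 0, 9]          # only two pixels changed
delta = delta_encode(prev, curr)
print(delta)                       # [(2, 9), (5, 9)]
assert delta_decode(prev, delta) == curr
```

For mostly static pictures the delta is tiny; real codecs go further with motion compensation rather than per-pixel differences.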

From what I have read, even newer technology uses so-called "AI" software, which can essentially "fill in" missing parts of the frame: a powerful processor continually samples the incoming frames and finds the most appropriate fill-ins for the missing parts from its memory.

I guess modern flat screens are more like computers with a digital radio-wave input than actual TVs in the classical sense.
They have all the computer parts (RAM, CPU, GPU); the only difference from a desktop PC or a laptop is the medium through which the information is sent, internet cable versus radio waves. But then again, for laptops, Wi-Fi is just another form of radio-wave broadcast.
 
  • #44
artis said:
the digital bandwidth would probably be comparable to the analogue
That would be a pretty inefficient digital transmission system for these days. You are assuming what would be a basic binary signalling system which has a bandwidth of the order of the 'Baud Rate'. Take WiFi, for instance, which is a pretty representative system. It uses an RF channel bandwidth of around 20MHz but can give data rates of over 200Mb/s with QAM.
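The bandwidth-versus-bit-rate point can be sketched with idealised numbers (assuming roughly one symbol per second per hertz and ignoring coding overhead, guard intervals and MIMO, which is how real Wi-Fi exceeds these figures):

```python
# Idealised link-rate sketch: bit rate = symbol rate x bits per symbol.
bandwidth_hz = 20e6                     # ~20 MHz Wi-Fi channel
symbol_rate = bandwidth_hz              # assume ~1 symbol/s per Hz
for name, bits in [("BPSK", 1), ("QPSK", 2), ("64-QAM", 6), ("256-QAM", 8)]:
    print(name, symbol_rate * bits / 1e6, "Mb/s")   # 20, 40, 120, 160 Mb/s
```

The point is that the channel bandwidth alone no longer fixes the data rate; the achievable modulation order (set by signal-to-noise ratio) does.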

I know it's more complicated than that because range and coverage need to be considered but there is no simple rule of thumb these days, like there was when we sent a binary data stream down a wire. It amazes me that I can usually get 70Mb/s along a couple of hundred metres of what was installed as telephone (naff audio quality) cable.
 
  • #45
I built a display like this back in the early 1980's. (Probably still have the prototype somewhere...) It was intended for a computer graphics system, so all of the transmission and compatibility issues others have mentioned were not a concern in this case. The primary motivation was, as you suggest, to not waste something like 10% of the frame time in retrace.

Since there was no compensating scan in an image tube originating the video, the line pairing at the edges was indeed an issue. To address this, we stepped the vertical scan. In order to get adequate step response, we had to have a relatively low inductance vertical deflection yoke driven by a high-bandwidth amplifier. There was an issue with horizontal errors, as someone mentioned. The primary cause of this was magnetic hysteresis in the ferrite core of the horizontal deflection yoke. Since I was generating the video electronically I could compensate for this fairly well, though it might still have been something of a problem in production due to temperature and part-to-part variations. It never made it into production, though. Ultimately, the reduction in required video bandwidth was not worth the extra power in the amplifiers and other drawbacks. Still, it was a great experiment.

An interesting related issue was that at the time, horizontal output transistors in standard horizontal scan circuits were NPN bipolar transistors operated as saturated switches, and woe betide you if your HOT ever came out of saturation during the flyback pulse! To ensure this never happened, and because of large variations in the transistor's beta due to temperature and lot variations, you had to make sure you had lots of base drive to cover the worst case with plenty of margin. This, of course, means not only a large storage time, but a wide variation in storage time across temperature and parts. This uncertainty meant you had to allocate extra time for horizontal retrace. To reduce this, I built a circuit that servoed the retrace pulse to the H Sync pulse. This worked quite well, though again, I don't think we ever put it into production.

By the way, I suspect just about every variation on CRT scanning has been tried at some point. I remember an article on a system that used a spiral scan, from the center outward I think. I don't know why. I also had an alphanumeric display that did a sort of mini-raster along each row of characters (fast vertical, slow horizontal as I recall), but only as far as the characters went in each row. Seems like a small jump from there to a full XY vector display.

Fun times.
 
  • #46
Thank you entropivore. I also remember an interesting slow-scan format from before the real digital era, where there was a long-persistence tube and the pixels were updated randomly.
 
  • #47
Using a scanned electron beam must limit the possibilities of reading out and displaying the image pixels. That beam and the scanning circuits effectively have a large amount of 'momentum' due to the inductance of the coils. That's no longer a necessary factor in the way the picture elements are transmitted.

CCD imaging is currently based on sequential access to the elements in picture lines, but the vertical sequence in which the lines are read out need not, I think, follow a particular order. In fact, how important is it even that the charge-coupled elements be physically arranged in a line? Now that an electron beam is no longer used, the whole notion of scanning 'as such' is less relevant to reading out the image information.

The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
 
  • #48
entropivore said:
I built a display like this back in the early 1980's. (Probably still have the prototype somewhere...) It was intended for a computer graphics system, so all of the transmission and compatibility issues others have mentioned were not a concern in this case. The primary motivation was, as you suggest, to not waste something like 10% of the frame time in retrace.

Since there was no compensating scan in an image tube originating the video, the line pairing at the edges was indeed an issue. To address this, we stepped the vertical scan. In order to get adequate step response, we had to have a relatively low inductance vertical deflection yoke driven by a high-bandwidth amplifier. There was an issue with horizontal errors, as someone mentioned. The primary cause of this was magnetic hysteresis in the ferrite core of the horizontal deflection yoke. Since I was generating the video electronically I could compensate for this fairly well, though it might still have been something of a problem in production due to temperature and part-to-part variations. It never made it into production, though. Ultimately, the reduction in required video bandwidth was not worth the extra power in the amplifiers and other drawbacks. Still, it was a great experiment.

An interesting related issue was that at the time, horizontal output transistors in standard horizontal scan circuits were NPN bipolar transistors operated as saturated switches, and woe betide you if your HOT ever came out of saturation during the flyback pulse! To ensure this never happened, and because of large variations in the transistor's beta due to temperature and lot variations, you had to make sure you had lots of base drive to cover the worst case with plenty of margin. This, of course, means not only a large storage time, but a wide variation in storage time across temperature and parts. This uncertainty meant you had to allocate extra time for horizontal retrace. To reduce this, I built a circuit that servoed the retrace pulse to the H Sync pulse. This worked quite well, though again, I don't think we ever put it into production.

By the way, I suspect just about every variation on CRT scanning has been tried at some point. I remember an article on a system that used a spiral scan, from the center outward I think. I don't know why. I also had an alphanumeric display that did a sort of mini-raster along each row of characters (fast vertical, slow horizontal as I recall), but only as far as the characters went in each row. Seems like a small jump from there to a full XY vector display.

Fun times.
I misspoke slightly regarding blowing up horizontal output transistors. The flyback actually occurs when you turn off the transistor, and the real problem is running out of base drive when you reach the peak current at the end of the sweep. Of course, trying to turn the output transistor back on during the flyback pulse will also cause grief, but that's not so likely to happen. Except... I think it was the Commodore PET that had a monitor where rather than having a horizontal oscillator that was synchronized to pulses from the video circuitry, in essence software directly drove the horizontal output stage, so you could in fact smoke the hardware through a programming error. Clever, eh?
 
  • #49
sophiecentaur said:
Using a scanned electron beam must limit the possibilities of reading out and displaying the image pixels. That beam and the scanning circuits effectively have a large amount of 'momentum' due to the inductance of the coils. That's no longer a necessary factor in the way the picture elements are transmitted.

CCD imaging is currently based on sequential access to the elements in picture lines, but the vertical sequence in which the lines are read out need not, I think, follow a particular order. In fact, how important is it even that the charge-coupled elements be physically arranged in a line? Now that an electron beam is no longer used, the whole notion of scanning 'as such' is less relevant to reading out the image information.

The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
It's true that the display scanning need no longer define the imager scanning (or vice versa), but there are other architectural issues at play. Interconnect limitations dictate that imager data is going to have to be serialized in some way, and if you want the full imager frame you might as well do that in a simple pattern. Some existing CMOS sensors provide the ability to shift out only a sub-region of the full array, which can be useful for machine vision applications such as target tracking but is less interesting for general photography and video applications.

Also, our compression standards (e.g., MPEG, H.264) have evolved around processing pixels in a predictable sequence. You could in principle build data compression into the imager itself, but there are architectural and economic arguments against that as well, so it probably only makes sense in a limited set of conditions.

Re your question about whether the imager elements need to even be in a line, consider that even though imaging and display are now greatly decoupled, in order to make sense of an image one still has to know the spatial organization of the original sampling points. Given this, it makes sense to standardize on a single pattern so as to provide interoperability across sensors and systems. (If you've ever had to deal with "non-square" pixels in an imager or display you'll probably know what a headache it can be.) I'm not sure to what degree the evolution of fabrication technologies influences this, but I suspect it may also come into play. That is to say, they seem to be optimized for rectilinear structures, so uniform XY grids are a natural choice. Ultimately, economics is a major driver of the evolution of technology.
 
  • #50
sophiecentaur said:
The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
Well, if what I understand about active-matrix TFT is correct, the vertical scan rate is the rate at which each pixel row's gates are switched on/off (the gates in each row are connected together), and this is matched by the data bus (a MOSFET drain line, one per column) that then drives each pixel capacitance on or off with its square wave.
So with this, all pixels within a single horizontal row can be controlled simultaneously and independently. I can't see how one could do this for the whole screen instead of a single row at a time, as that would require additional layers of wires on the TFT matrix so that any sub-pixel in the whole matrix could be controlled independently. That seems unrealistic, and the driving circuitry would probably have to be orders of magnitude more complex. Is there something I'm missing?
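The row-at-a-time addressing described above can be sketched as follows (a pure-software illustration; the driver functions are hypothetical stand-ins for the gate and data drivers, recording the sequence rather than driving hardware):

```python
events = []                        # records the drive sequence for illustration

def select_gate_line(row):         # hypothetical gate-driver call
    events.append(("gate_on", row))

def drive_data_lines(row_values):  # all column data lines driven in parallel
    events.append(("data", tuple(row_values)))

def deselect_gate_line(row):       # pixel capacitors now hold their charge
    events.append(("gate_off", row))

def refresh_frame(frame):
    """One vertical scan: rows selected in turn, columns driven in parallel."""
    for r, row in enumerate(frame):
        select_gate_line(r)
        drive_data_lines(row)
        deselect_gate_line(r)

refresh_frame([[1, 0], [0, 1]])
print(events[:3])   # [('gate_on', 0), ('data', (1, 0)), ('gate_off', 0)]
```

Only one gate line is active at a time, which is why the data wiring scales with the number of columns rather than with the total pixel count.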
 
  • #51
sophiecentaur said:
In fact, how important is it even that the charge coupled elements would actually need to be in a line?
We have a huge pile of image mathematics built on the line-and-raster system. As a storage and transfer method, I don't think it will ever fundamentally change.

sophiecentaur said:
The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
With the rapid development of IC technology, it is indeed possible to assign sophisticated circuitry to every pixel, and this opens up many new possibilities regarding (colour and movement) dynamics and on-the-fly processing.
On the other hand, if you take a photo it is still expected to capture the whole field of view.
 
  • #52
entropivore said:
I think it was the Commodore PET that had a monitor where rather than having a horizontal oscillator that was synchronized to pulses from the video circuitry, in essence software directly drove the horizontal output stage, so you could in fact smoke the hardware through a programming error. Clever, eh?
That was implemented in the early IBM PC desktop computers in the early 1980s. There was no possibility of troubleshooting the display without it being connected to the computer. With no horizontal oscillator, all the magic smoke would escape from the horizontal output stage! :rolleyes:
 
  • #53
entropivore said:
Some existing CMOS sensors provide the ability to shift out only a sub-region of the full array, which can be useful for machine vision applications such as target tracking but is less interesting for general photography and video applications.
I am really not up to date with details like that, but it sort of makes my point that line scanning is not a 'given'.
entropivore said:
Given this, it makes sense to standardize on a single pattern so as to provide interoperability across sensors and systems.
With the present state of things you are right. But the 'readout' sequence could be varied to suit the particular image (image sequence) in an intelligent way. That sequence could be sent to the decoder.
artis said:
Well, if what I understand about active-matrix TFT is correct, the vertical scan rate is the rate at which each pixel row's gates are switched on/off (the gates in each row are connected together), and this is matched by the data bus (a MOSFET drain line, one per column) that then drives each pixel capacitance on or off with its square wave.
In the end, it's a matter of achievable data handling speeds and we will surely do a lot better than what you are describing. If the image sensor readout were pixel orientated, it could be treated as a random access memory and a more intelligent coding processor could assemble the optimum image (sequence) data. The possibility of getting better quality images will rely on intelligent systems which present the data best to the human eye / brain.

The motion aspect of image sensing tends to be ignored in many of these discussions. Even going back to 'old-fashioned TV', the practice of interlacing fields was all about reducing the jerkiness of a 25/30 Hz frame rate by doubling the temporal sampling rate.
 
  • #54
Well, I agree, @sophiecentaur, that if one had the option of changing individual pixels all across the screen at once, and doing so at the same rate at which the camera's captured pixels change due to light changes, then I guess we could forget the term "frame rate". The picture would be much smoother, but then again the question is how fast we can change individual pixels at once, and whether that matches up to the most challenging videos captured.
 
  • #55
artis said:
whether that matches up to the most challenging videos captured
That would take us into the same problems that the developers of MPEG have encountered: matching the channel to the subject material and to psychology. It would be more along the lines of the way human vision works. But there can be no doubt that a scanning system based on revolving drums or deflecting electron beams can be improved on significantly.

I would say we're not even half way there.
 
  • #56
sophiecentaur said:
It would be more along the lines of the way human vision works.
Yes, that also came to my mind: how exactly does human vision work in this regard? I'm pretty sure we don't have horizontal row and vertical column scan rates. But I don't want to derail this otherwise good thread, so I guess one would need to start another thread for that.
 
  • #57
artis said:
I'm pretty sure we don't have horizontal row and vertical column scan rates.
It was very much the tail wagging the dog. The tail was what we could do at the time and the rest of the TV system followed.

The way we see things is so strange. I remember the head of our School Art Department covering my Science lesson on the last (fun) day of Christmas term. She was indulging the kids by drawing portraits of a few of them. Her method was to start top left of the A4 paper and more or less produce the picture as if she was writing / scanning it in large type. When she got to the bottom right, the picture was done. I was gobsmacked that the sketches were all good likenesses and it set me thinking about what she was actually doing in producing the likenesses in that way.
 
  • #58
sophiecentaur said:
That would take us into the same problems that the developers of MPEG have encountered: matching the channel to the subject material and to psychology. It would be more along the lines of the way human vision works. But there can be no doubt that a scanning system based on revolving drums or deflecting electron beams can be improved on significantly.

I would say we're not even half way there.
When you say we're not even half way there, it's not clear to me where "there" is, or (with apologies to Gertrude Stein) if there's even a there there. To put it another way, what problem are you really trying to solve?

One can certainly imagine an imager built on top of some sort of neuromorphic substrate with a massively parallel interconnect. This might be very useful for applications like missile guidance or other machine vision problems, or in building something like an artificial retina. For the larger set of applications, though, data acquired by an imager need to be conveyed in fairly raw form to a physically distinct entity, and that degree of parallelism isn't an option. So, we're stuck with some form of serialized transmission. If the addressing sequence is not defined in advance, that is, if the pixels are sent in apparently random order, then the overhead of sending addresses along with the pixel data becomes a heavy burden. If you had a 4k x 4k imager you'd need 24 bits for each address. If you have 24 bits of pixel data, then you've just doubled the bandwidth requirement, but to what end?
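The addressing arithmetic checks out; a quick sketch with the same illustrative numbers:

```python
import math

# Self-addressed pixels on a hypothetical 4k x 4k imager: naming any one of
# 2^24 pixels takes 24 bits, the same size as a 24-bit RGB sample, so a
# full-frame readout would double the raw data rate for no gain.
pixels = 4096 * 4096
addr_bits = math.ceil(math.log2(pixels))
pixel_bits = 24
overhead = addr_bits / pixel_bits
print(addr_bits, overhead)        # 24 1.0
```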

One of the benefits of raster scanning is that it preserves locality of reference. This means that you can do on-the-fly processing without assembling full frames, which is advantageous in terms of latency and storage requirements. Running a filter over a rasterized pixel sequence requires storing only a few lines' worth of pixel data. On a random sequence, it would require storing the entire frame to be sure you had all of the pixels for each iteration of the filter.
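The locality argument can be made concrete with a toy streaming filter (illustrative code, not from the thread): a 3-line vertical mean over a raster stream needs only a 3-line buffer, whereas a random pixel order would force buffering the whole frame.

```python
from collections import deque

def stream_vertical_mean(lines, k=3):
    """Yield k-line vertical means from a raster-ordered line stream,
    holding at most k lines in memory at any moment."""
    buf = deque(maxlen=k)
    for line in lines:                 # lines arrive in raster order
        buf.append(line)
        if len(buf) == k:
            yield [sum(col) / k for col in zip(*buf)]

frame = [[0, 0], [3, 3], [6, 6], [9, 9]]
print(list(stream_vertical_mean(frame)))  # [[3.0, 3.0], [6.0, 6.0]]
```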

Unlike the early days of television, it is a rare case nowadays that images are conveyed from a sensor to a display without alteration. Let's say you have some non-raster sequence that you've determined is optimal for extracting data from the sensor. How would you composite that stream with another one, which I guess would have a completely different address sequence? Even if the two streams had the same address sequence, would that still be the optimal sequence for the composite result? What happens after you've applied scaling or other transforms?

Perhaps I'm missing your point. Is your idea to do away with the entire notion of video as a sequence of frames? I can imagine this in some sort of special case with a one-to-one mapping between a sensor and a display, analogous to a coherent fiber optic bundle, for example. How this would work in a more general case is much less clear to me. Disregarding the rather onerous addressing overhead mentioned above, I can sort of see how you might do compositing and spatial transforms, but it would seem to break anything that relies on locality of reference, such as spatial filtering. (I haven't even begun to try to get my head around temporal filtering in such a system.)

Bear in mind that of all the pixels in the universe, a significant (and rapidly increasing, I expect) portion of those captured by imagers are never displayed for human eyes, and likewise many of those displayed for human viewing never originated from real-world image capture. Coming up with an entirely new video paradigm that is optimized for direct sensor-to-display architectures seems like a misdirected effort.
 
  • #59
entropivore said:
When you say we're not even half way there, it's not clear to me where "there" is, or (with apologies to Gertrude Stein) if there's even a there there. To put it another way, what problem are you really trying to solve?

One can certainly imagine an imager built on top of some sort of neuromorphic substrate with a massively parallel interconnect. This might be very useful for applications like missile guidance or other machine vision problems, or in building something like an artificial retina. For the larger set of applications, though, data acquired by an imager need to be conveyed in fairly raw form to a physically distinct entity, and that degree of parallelism isn't an option. So, we're stuck with some form of serialized transmission. If the addressing sequence is not defined in advance, that is, if the pixels are sent in apparently random order, then the overhead of sending addresses along with the pixel data becomes a heavy burden. If you had a 4k x 4k imager you'd need 24 bits for each address. If you have 24 bits of pixel data, then you've just doubled the bandwidth requirement, but to what end?

One of the benefits of raster scanning is that it preserves locality of reference. This means that you can do on-the-fly processing without assembling full frames, which is advantageous in terms of latency and storage requirements. Running a filter over a rasterized pixel sequence requires storing only a few lines' worth of pixel data. On a random sequence, it would require storing the entire frame to be sure you had all of the pixels for each iteration of the filter.

Unlike the early days of television, it is a rare case nowadays that images are conveyed from a sensor to a display without alteration. Let's say you have some non-raster sequence that you've determined is optimal for extracting data from the sensor. How would you composite that stream with another one, which I guess would have a completely different address sequence? Even if the two streams had the same address sequence, would that still be the optimal sequence for the composite result? What happens after you've applied scaling or other transforms?

Perhaps I'm missing your point. Is your idea to do away with the entire notion of video as a sequence of frames? I can imagine this in some sort of special case with a one-to-one mapping between a sensor and a display, analogous to a coherent fiber optic bundle, for example. How this would work in a more general case is much less clear to me. Disregarding the rather onerous addressing overhead mentioned above, I can sort of see how you might do compositing and spatial transforms, but it would seem to break anything that relies on locality of reference, such as spatial filtering. (I haven't even begun to try to get my head around temporal filtering in such a system.)

Bear in mind that of all the pixels in the universe, a significant (and rapidly increasing, I expect) portion of those captured by imagers are never displayed for human eyes, and likewise many of those displayed for human viewing never originated from real-world image capture. Coming up with an entirely new video paradigm that is optimized for direct sensor-to-display architectures seems like a misdirected effort.
As I mentioned previously, a long persistence CRT can do individual pixel scanning by driving X and Y plates with suitable noise-like waveforms.
 
  • #60
tech99 said:
As I mentioned previously, a long persistence CRT can do individual pixel scanning by driving X and Y plates with suitable noise-like waveforms.
Certainly. The DEC 338 and 339 displays were classic examples of this. (Remember light pens?) Genisco even built a 3D display by combining a vector display and an oscillating mirror. (Look up the Genisco "Spacegraph".) But there's a reason why vector displays are now essentially historical artifacts and computer display technology and video display technology have converged.
 
  • #61
Well, as far as I'm aware, we also don't have camera sensors that could catch all of the light passed onto them at every instant at once.
As for screens, I'm not sure whether that is technically doable, because a TFT screen would require each subpixel transistor to have individual access wires to its gate and drain, while only the source could be left common to all the pixel transistors. This would take up much more of the space otherwise reserved for light to pass through from behind.
But then I got thinking about OLED displays, and maybe one could implement this for them, because in an OLED the subpixel itself is the light-emission source, so there is no need for transparency as in TFT; technically you can cover the backside of the pixel matrix with as much wiring or wire mesh as you like, or as is technically possible.
Then there is the question of how fast you could possibly drive them in a demanding video.
Although, on second thought, the driving speed shouldn't be that high, because in any video the actual speed with which pixels change their color is not that high.
As of now an average TFT panel is driven in the MHz range, as far as I know, and that is because you have to assemble, say, 50 frames per second, where for every frame you have to drive through it row by row. But in a rasterless, scanless method I think that on an average video you could actually have lower overall panel drive frequencies than currently, if you had access to each individual pixel; then you don't need to rush as fast in order to get to every pixel in time. Would I be correct in saying that the main problem here is not the speed/frequency but getting a physical layout where every pixel is individually controllable, and then having a drive circuit that is capable of managing so many pixels all at once?
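A rough sanity check on the drive-rate claim (illustrative numbers; the 5% changed-pixel figure is an assumption, not measured data):

```python
# Raster-driven panel: every pixel is rewritten every frame.
width, height, fps = 3840, 2160, 50
raster_pixel_rate = width * height * fps          # pixel updates per second
print(raster_pixel_rate / 1e6)                    # 414.72 Mpixel updates/s

# Event-driven panel: only pixels that actually change are rewritten.
# Assume (hypothetically) that 5% of pixels change per frame interval
# on average video content.
changed_fraction = 0.05
event_update_rate = raster_pixel_rate * changed_fraction
print(event_update_rate / 1e6)                    # 20.736 Mupdates/s
```

Under that assumption the aggregate update rate does drop by an order of magnitude, though the peak rate (a full scene cut, where every pixel changes at once) is unchanged.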
 
  • #62
As far as I'm concerned, this kind of image capturing and processing will likely be about interfacing with some kind of neural network: either biological or artificial.
And as long as the inbuilt 'instruments' we are born with can provide satisfying performance in interacting with the usual 2X2D image we perceive, I do not expect the current way of image capture/transmission/display to change.

The special areas/cases would be sufficient to drive this direction of development forward, though.
 
  • #63
entropivore said:
if there's even a there there.
I'll let you know, when we get there.
entropivore said:
if the pixels are sent in apparently random order, then the overhead of sending addresses along with the pixel data becomes a heavy burden.
Every quantity of data is accompanied by its coding rules. The simplest system (e.g. old TV) uses coding rules that only change when the system is changed; that's bad value but easy to engineer. Beyond that there is always some header information which deals with the coding and error reduction. Any system that's worth its salt will use less overhead data than it saves.
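The "overhead versus savings" trade can be written as a break-even fraction (a sketch with the bit widths assumed earlier in the thread; real codecs use far cleverer entropy coding than this):

```python
def event_coding_wins(changed_fraction, pixel_bits=24, address_bits=24):
    """Return True if sending (address + value) only for changed pixels
    uses fewer bits per frame, on average, than resending every pixel."""
    full_frame_bits = pixel_bits                           # per pixel, raster order
    event_bits = changed_fraction * (address_bits + pixel_bits)
    return event_bits < full_frame_bits

# Break-even: f * (24 + 24) < 24  ->  f < 0.5
print(event_coding_wins(0.4))   # True: 40% of pixels changed, events win
print(event_coding_wins(0.6))   # False: cheaper to resend the whole frame
```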
Rive said:
the usual 2X2D image we perceive
We may be shown a 2D image (in fact, it's a sequence of images) but we perceive much more than that. You are ignoring the motion portrayal and the way our brains 'remember' what's behind that car which just parked there. Also, we used to watch many "satisfying" performances on 405 line B/W telly and people often say the pictures are better on Radio.
 
  • #64
sophiecentaur said:
people often say the pictures are better on Radio.
Even better in books. The thing that picture books and books with words have in common is that when you start reading them, they're all picture books.

Anyway, data-wise I don't think transmitting a 4k image where each pixel is sent at the instant it changes in the camera (sort of like in old-school TV) is that difficult. Given that we now have AI software that, given a big database, can basically recover object forms, sizes and colors from a blurred-out or faulty image, we could essentially have instant image transmission just in "bad quality": instead of sending each subpixel you send a larger block of the screen as one data unit, and at the end the software sees the "square bullet" and rounds it off. So, in a sense, even though you are sending the live image as it changes, all at once, you are just making it have lower resolution and then gaining back the missing resolution at the end. Sort of like recording a live concert and transmitting it in mp3 or some other "squeezed" format, then restoring it at the end.

Although I'm not sure how much you can "chirp off" at the original end and still recover at the user end.
But given that the user end would have sophisticated software, I assume quite a lot, and it would still qualify as a real-time, non-raster/scan image.
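At its simplest, the "send coarse blocks, restore detail later" idea is block averaging; here is a toy sketch in pure Python (the AI-based restoration step at the receiver is out of scope and not attempted here):

```python
def block_average(frame, block):
    """Downsample a frame by averaging non-overlapping block x block tiles -
    producing the 'square bullets' the receiver would have to round off."""
    h, w = len(frame), len(frame[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            tile = [frame[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            row.append(sum(tile) // len(tile))
        out.append(row)
    return out

frame = [[x + y for x in range(4)] for y in range(4)]   # a tiny 4x4 gradient
print(block_average(frame, 2))   # -> [[1, 3], [3, 5]]: 4x the pixels per data unit
```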
 
  • #65
artis said:
Well, as far as I'm aware, we also don't have camera sensors that could catch all of the light passed onto them at every instant at once.
I don"t know what you you mean by this. Can you elucidate your meaning? Do you mean different colors?
 
  • #66
artis said:
Anyway, data-wise I don't think transmitting a 4k image where each pixel is sent at the instant it changes in the camera (sort of like in old-school TV) is that difficult.
I can't make sense of this. Even if you had 4K X 16bit data transmitted in parallel, the time occupied by each clock pulse and the detection time would be significant. And I think you are forgetting that still images are very often of very limited use so you would still need to take some time (even if compressed) to send a few seconds' worth of a movie.
Wherever you turn, the fundamental limitations of bandwidth and noise are always there. In the camera, each exposure takes a finite time so the output rate is limited - for each pixel, however the data is arranged and transmitted. "Old school" TV is incredibly wasteful because it sends the same information every frame. You could sometimes save a lot of capacity by sending the message "one hour of test card F" (23 ASCII characters per hour). To make good use of a channel, the sequence of scanned images needs to be analysed to find the amount of actual information that's needed. To do this on the fly (one frame at a time) is fast but wasteful. If you are prepared to introduce a delay in transmission, you could examine a sliding time window of, say, 1 s or 100 s and make sure you send as little repeated data as possible.
None of the above is specific to scanned or random-access imaging; if you can extract the actual picture content then you can send it and display it in any way you want. We already standards-convert to suit our device screens and resolutions.
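The test-card example is easy to quantify (illustrative arithmetic only, using uncompressed 8-bit 4:2:2 SD figures as an assumed baseline):

```python
# One hour of raw 4:2:2 SD video vs. the 23-character message
# "one hour of test card F".
width, height, fps, bits_per_pixel = 720, 576, 25, 16   # 8-bit 4:2:2 -> 16 b/px
seconds = 3600
raw_bits = width * height * fps * bits_per_pixel * seconds
msg_bits = 23 * 8                                        # 23 ASCII characters

print(raw_bits / 1e9)        # ~597 Gbit for the hour of raw video
print(raw_bits // msg_bits)  # ratio of raw bits to message bits: billions to one
```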
 
  • #67
hutchphd said:
I don"t know what you you mean by this. Can you elucidate your meaning? Do you mean different colors?
sophiecentaur said:
I can't make sense of this. Even if you had 4K X 16bit data transmitted in parallel, the time occupied by each clock pulse and the detection time would be significant. And I think you are forgetting that still images are very often of very limited use so you would still need to take some time (even if compressed) to send a few seconds' worth of a movie.
Wherever you turn, the fundamental limitations of bandwidth and noise are always there. In the camera, each exposure takes a finite time so the output rate is limited - for each pixel,
My meaning was this. All known digital image sensors, like CCDs for example, are not continuous; they instead have an exposure time and a transfer/shutter time. The faster ones, like "frame transfer" and "interline transfer", are simply very fast, but they still have a "dead time", i.e. a time when the MOS structure doesn't accept photoelectrons and is instead configured by the gate electrodes to move the lines of pixel charges into the serial register for output.

Now, not considering crosstalk, bandwidth, or any other problems, what I was thinking of was more like an analog capture frame, where each pixel, instead of being charged/moved/discharged, is constantly on. Much like a diode, but with no voltage drop. Each pixel, the moment it is hit with photons, outputs a signal proportional to the number of photons that land on it. The same is true of CCDs, where each pixel charge represents the number of photons that hit the pixel during the exposure time; the difference would be that instead of that charge then being moved/read out, it is read out continually with no delay. This would most likely necessitate an analog approach.

The problem of course is that even if one manages to build a pixel frame where each pixel can be continually read out, it would still most likely call for some sort of data management and possibly an ADC down the road, as I cannot imagine how one could transmit 4k pixels continuously.

Or you could still read the entire pixel matrix continually and then ADC each pixel, ending up with a gigantic amount of continuous digital data representing the matrix at each instant; so in the end it would still not be continuous, but the frame rate could be kicked sky high, as the screen would not be made from one scan line at a time but instead all lines at once, and then as fast as one can remake those lines in a second.

Another way to do this would be to represent each pixel with a photodiode and then transmit the whole frame optically, but that again leads to pretty much the previously mentioned problems.
But it is an interesting thought puzzle.
 
  • #68
Comparing this idea to the old-school CRT method, it would be similar to having a tube not with one electron gun with high voltage and deflection, but instead a multitude of small guns close to each pixel/subpixel, working not in a raster-scan fashion but instead continually illuminating each pixel (or not illuminating it, during dark moments). In this regard the whole screen would not be divided into frames but instead lit continually, while the brightness of individual subpixels/pixels changes with the motion of the video.

I think the closest we have ever come to anything like this is plasma screens, but IIRC they too are scanned instead of continually changing each pixel's brightness, although in theory they could, if the control circuitry and signal processing were up to the task.

This approach, I believe, would make a video seem 100% natural, as that is how we perceive things in nature and how light/vision works naturally: the light source (the sun, for example) is continuous, and any change in the light reflected by a moving body is also continuous, not framed or scanned.
Best things in life are all analog I guess...
 
  • #69
I don't understand your point. The front end of most existing solid state image sensors is essentially one analog photodiode (sometimes a phototransistor) per pixel. The quantum efficiency can be quite high (so in that sense it is "digital") and the "down time" is minimal.
How the resultant electrons are then handled depends upon the device but typically they ~continuously charge a capacitor that is read out in a variety of clever ways. Light energy is not squandered.
So what are you attempting to improve?
 
  • #70
artis said:
My meaning was this. All known digital image sensors, like CCDs for example, are not continuous; they instead have an exposure time and a transfer/shutter time. The faster ones, like "frame transfer" and "interline transfer", are simply very fast, but they still have a "dead time", i.e. a time when the MOS structure doesn't accept photoelectrons and is instead configured by the gate electrodes to move the lines of pixel charges into the serial register for output.
I interpret this as meaning that you appreciate that all such systems are sampled. Yes, they are, because, with the exception of analogue sound, every form of transmission or recording involves sampling - whether explicit or not.
artis said:
Another way to do this would be to represent each pixel with a photodiode and then transmit the whole frame optically, but that again leads to pretty much the previously mentioned problems.
But it is an interesting thought puzzle.
But even this is a sampled system (spatially, in pixels). Transmitting 4k separate analogue channels would be a pointless extreme. Every information channel is limited in power, bandwidth and space. By space, I mean available signal pathways. Bandwidth and space can be described together in terms of bandwidth - i.e. two channels of a given data rate are equivalent to a single channel of twice the data rate, and the bottleneck is always the channel bandwidth (or the equivalent for recording media).
It's already been found that the choice of an efficient coding always involves finding out as much as possible about the psychology of human perception. The research that produced MPEG, in all its versions, involved a lot of subjective testing to get the most possible juice out of the lemon, but such systems are stuck with backwards-compatibility problems. One advantage we do have is that processing power is still going up and up, so we can look forward to better and better experiences of image and sound transmission.
 
