How Safe Are Self-Driving Cars After the First Fatal Accident?

  • Thread starter Dr. Courtney
  • Start date
  • Tags
    Car Self
In summary, a self-driving Uber car struck and killed a pedestrian in Arizona. The small experimental installed base of self-driving cars raises concerns about the technology, and the tragedy will be scrutinized like no other autonomous-vehicle incident before it.
  • #106
Ryan_m_b said:
Also of interest is this footage of the investigators looking into the accident. Using the same SUV involved in the crash, they can be seen driving that same stretch of road at speed and attempting to brake before reaching the point where Elaine was standing. From the video it seems there was more than enough space to brake in time.

https://twitter.com/LaurenReimerTV/status/977077647543955458

Tentatively, my view that this was a failure of two parts (one, the car not responding; two, the driver not overriding the car) is looking to be correct.

I would say the video above only proves human response time in a known stopping condition. It has questionable usefulness for what actually happened, even if a human had been driving. I'm not exactly sure what the technical basis for over-driving the headlights would be here: 40 MPH on good low beams still gives plenty of time to brake with a perfect human response. I also don't know the legal limitations on over-driving headlights, because I don't see the speed limit enforced down to 40 MPH on roads with a 70+ MPH limit at night, even with marginal lighting.
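For a rough sense of the numbers (back-of-envelope only, with an assumed reaction time and deceleration, not figures from the investigation):

```python
# Back-of-envelope stopping distance at 40 MPH.
# Assumed values: 1.5 s perception-reaction time, 0.7 g braking on dry pavement.
MPH_TO_MS = 0.44704
G = 9.81

def stopping_distance_m(speed_mph, reaction_s=1.5, decel_g=0.7):
    v = speed_mph * MPH_TO_MS                 # speed in m/s
    reaction = v * reaction_s                 # distance covered before braking begins
    braking = v ** 2 / (2 * decel_g * G)      # kinematic braking distance v^2 / (2a)
    return reaction + braking

print(round(stopping_distance_m(40), 1))      # ~50 m (about 165 ft) total at 40 MPH
```

That is in the same ballpark as the reach typically quoted for low beams, so a perfect response should still get you stopped.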

IMO the poor homeless lady specifically chose that crossing location for a 'stealth' crossing because it's just beyond a street-light boundary; a better-lit spot could just as easily have been chosen. She wore dark, non-reflective clothing with no reflectors on the bike, reducing her detection cross-section for humans, video, and detection systems alike, likely intentionally, so she could hide easily as a homeless person at risk. I'm not blaming her, I'm just saying her actions caused her death.
 
  • Like
Likes berkeman
  • #107
russ_watters said:
I was thinking something similar (I mentioned it earlier), but let's not get conspiracy-theory-ish about this: cameras are inherently inferior to our eyes for this purpose (range of brightness), and proper automatic adjustment is difficult at best. That's why HDR photography was invented. So the quality might be poor, but that is not unusual. The video you linked looks to me like it is using a brightness-boosting/leveling technology.

So my question is: are they using cameras for obstacle avoidance? Previously you mentioned LIDAR: was that speculation, or do you know it was using LIDAR?

From the little information I've been able to find, the safety driver was looking down at the object-detection display (which combined all sensors, including LiDAR) on a laptop or similar computer, as it was her job to monitor the system.

Her timing as a detection target while crossing with the bike, relative to the background and the car's angle of approach, might have reduced her unique human signature as she blended with a bike that had various sizes of plastic bags strung over it. I wonder how many pictures of homeless people walking laden bikes are in the image databases used for high-confidence target classification? It's possible the object-detection system generated a false negative for a person or bike, classifying her as a more benign object, like a slowly moving trash bag near the side of the road, until it was too late to stop or avoid her.
https://pdfs.semanticscholar.org/cd36/512cbb2701dccda3c79c04e6839d9f95852b.pdf
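To make the false-negative concern concrete (a toy sketch of a confidence-gated pipeline with made-up scores, not Uber's actual software):

```python
# Toy illustration: a planner that only reacts to detections whose class
# confidence clears a threshold can effectively ignore an unusual object.
BRAKING_CLASSES = {"pedestrian", "cyclist", "vehicle"}
CONFIDENCE_THRESHOLD = 0.6

def braking_relevant(class_scores):
    """True if any braking-relevant class clears the confidence threshold."""
    return any(class_scores.get(c, 0.0) >= CONFIDENCE_THRESHOLD
               for c in BRAKING_CLASSES)

# A person pushing a bag-laden bike may not score strongly as anything familiar:
odd_object = {"pedestrian": 0.35, "cyclist": 0.30, "unknown/debris": 0.35}
print(braking_relevant(odd_object))   # False -> treated as not braking-relevant
```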

[Attached image: homeless-man-searching-for-redeemable-containers-in-a-trash-can.jpg]
 

  • #108
The job of the object detection system, when an object appears in front of the car, is to tell the car to stop; after all, it could be a boulder or a moose.
 
  • #109
gleem said:
The job of the object detection system, when an object appears in front of the car, is to tell the car to stop; after all, it could be a boulder or a moose.

What if it's a wind-driven trash bag or pages from a newspaper 'flying' across the road and you have a cement truck behind you at 40 MPH? Classification of objects as a boulder or a moose matters beyond mere detection, because the response should be different for 'benign' objects seen by the detection system. Executing an emergency stop for every object detected is dangerous too.
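Purely as an illustration of that trade-off (invented harm numbers, not anyone's actual planner logic):

```python
# Toy cost comparison: hard braking is only justified when the expected harm of
# hitting the object outweighs the expected harm of being rear-ended.
def should_emergency_brake(object_harm, rear_collision_prob, rear_harm=0.3):
    return object_harm > rear_collision_prob * rear_harm

print(should_emergency_brake(object_harm=1.0,  rear_collision_prob=0.5))  # pedestrian -> True
print(should_emergency_brake(object_harm=0.01, rear_collision_prob=0.5))  # newspaper  -> False
```

So getting the classification right matters in both directions.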
 
  • #110
The cement truck is driving too close for conditions. BTW, we didn't stop our car for what appeared to be a piece of rubber tire, and it turned out to be something more substantial, cracking the differential housing. If the object appears too quickly, even a human may initiate an emergency stop, or maybe even worse, swerve to try to avoid it, as we are often instructed not to do in the case of an animal.
 
  • #111
gleem said:
The cement truck is driving too close for conditions. BTW, we didn't stop our car for what appeared to be a piece of rubber tire, and it turned out to be something more substantial, cracking the differential housing. If the object appears too quickly, even a human may initiate an emergency stop, or maybe even worse, swerve to try to avoid it, as we are often instructed not to do in the case of an animal.

My point exactly on why classification is important: if the cement truck is driving too close for conditions, we need to be sure an emergency braking sequence is worth the risk. The failure here seems to be a classification error, because pure detection should have been easy with a functional Lidar system.

These systems are less robust than most people think.
 
Last edited:
  • Like
Likes collinsmark
  • #112
russ_watters said:
The news is calling him a "safety driver". Presumably that means his primary function is preventing just this sort of accident. But until we know what he was doing or what his full job description was, it is difficult to know how much blame he has. If he was on Facebook, then he has considerable fault. If he was performing Uber-assigned systems monitoring, then he has none.
The methodology used here for testing driverless cars is not foolproof.
In fact, consider the following:
1. Premise: the technology is considered mature enough to allow the vehicle to operate in real-world situations.
2. If the technology is mature enough, then the testing phase is unnecessary.
3. A 'safety driver' occupies the vehicle during the unnecessary testing phase.
4. The unnecessary testing phase then becomes a test of the actions and responses of the 'safety driver'.
5. Since the 'safety driver' is a human, the testing becomes an actual study in human behavior.
 
  • #113
In the aircraft world there is a lot of concern about the lack of manual flying due to excessive use of automation. They think pilots are losing the skills needed to fly. There have been a few accidents due to pilots failing to respond correctly when the autopilot suddenly hands back control, and also when the autopilot does the wrong thing as a result of information from faulty sensors.

In one case a radio altimeter failed and indicated -8 feet constantly. The pilots recognised it was faulty, and the autopilot appeared to ignore the faulty data and work normally. Then, as they came in to land, the autopilot suddenly decided that -8 feet meant the aircraft must be at the right height to shut off the engines and slow for landing. The aircraft crashed 1 km short of the runway.
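Even a crude plausibility gate would reject a reading like that (a sketch only, with invented thresholds, and nothing like certified avionics code):

```python
# Minimal sanity check on a radio-altimeter reading before the autopilot uses it.
def validated_radio_altitude(reading_ft, previous_ft, max_rate_ft_s=300.0, dt_s=1.0):
    if reading_ft < -5.0 or reading_ft > 2500.0:            # physically implausible value
        return None                                          # flag as failed; do not use
    if abs(reading_ft - previous_ft) > max_rate_ft_s * dt_s:
        return None                                          # changed too fast to be real
    return reading_ft

print(validated_radio_altitude(-8.0, previous_ft=900.0))     # None -> should not be trusted
```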

I own a 10 year old car and bits fail all the time without warning.
 
Last edited:
  • #114
NYT article of interest: https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html

Also, no one seems to be picking up on the fact that the "safety driver", "operator" or whatever, was a convicted felon who spent time in jail for armed robbery...

Why would Uber hire someone like this to be part of a research project unless they were trying to cut corners financially, and hire people at dirt-cheap wages? This makes me very suspicious of where else they were trying to cut corners...
 
  • #115
dipole said:
NYT article of interest: https://www.nytimes.com/2018/03/23/technology/uber-self-driving-cars-arizona.html

Also, no one seems to be picking up on the fact that the "safety driver", "operator" or whatever, was a convicted felon who spent time in jail for armed robbery...

Why would Uber hire someone like this to be part of a research project unless they were trying to cut corners financially, and hire people at dirt-cheap wages? This makes me very suspicious of where else they were trying to cut corners...

https://www.jailstojobs.org/6017-2/
https://www.uber.com/info/policy/criminal-justice-reform/
 
  • Like
Likes OmCheeto
  • #116
I'm sure there must be some kind of 'black box' in these autodrive cars, just as there are on aircraft.
That means there is actual data which can reveal what the car's system was doing at the time.
If there is not such a black box, then why the hell isn't there?
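Even something as simple as a fixed-length ring buffer of recent sensor and decision state would do it (a sketch with assumed field names, not whatever Uber actually logs):

```python
from collections import deque
import time

# Minimal "black box": keep the last N samples of vehicle state, dump on demand.
class EventRecorder:
    def __init__(self, max_samples=600):                   # e.g. 60 s at 10 Hz
        self.buffer = deque(maxlen=max_samples)

    def record(self, speed_mps, detections, brake_command):
        self.buffer.append({"t": time.time(), "speed": speed_mps,
                            "detections": detections, "brake": brake_command})

    def dump(self):
        return list(self.buffer)     # in a real car this would go to crash-safe storage

recorder = EventRecorder()
recorder.record(speed_mps=17.8, detections=["unknown_object"], brake_command=0.0)
print(len(recorder.dump()))          # 1 sample buffered
```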
 
  • #117
Isaac Asimov, where are you in our time of need?

https://en.m.wikipedia.org/wiki/Three_Laws_of_Robotics

Following is a quote from the Wikipedia article. Note particularly #5 - sounds like a good idea.

"In October 2013, Alan Winfield suggested at an EUCog meeting[55] a revised 5 laws that had been published, with commentary, by the EPSRC/AHRC working group in 2010.:[56]

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  2. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
  3. Robots are products. They should be designed using processes which assure their safety and security.
  4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  5. The person with legal responsibility for a robot should be attributed."
 
  • #118
256bits said:
The methodology used here for testing driverless cars is not foolproof.
In fact, consider the following:
1. Premise: the technology is considered mature enough to allow the vehicle to operate in real-world situations.
2. If the technology is mature enough, then the testing phase is unnecessary.
3. A 'safety driver' occupies the vehicle during the unnecessary testing phase.
4. The unnecessary testing phase then becomes a test of the actions and responses of the 'safety driver'.
5. Since the 'safety driver' is a human, the testing becomes an actual study in human behavior.
I don't understand, starting at #2: the "testing phase" happens before the technology is mature. It's what makes the technology mature! The idea that they are testing immature technology on real city streets with no government oversight is both bizarre and scary to me. It sounds like you are suggesting a catch-22 based on a premise that they have to be tested on real city streets before they are ready to be driven on city streets. That just isn't the case and shouldn't be acceptable (gotta break a few eggs? Only if they are fake eggs). See:
The ride-hailing giant published a new video earlier this month showing a glimpse of its fake city where the company's robocars learn how to drive in the real world.

Called Almono, the fake city is built on an old steel mill site along the Monongahela River in the Hazelwood neighborhood of Pittsburgh. It has a giant roundabout, fake cars, and roaming mannequins that jump out into the street without warning. [emphasis added]
http://www.businessinsider.com/ubers-fake-city-pittsburgh-self-driving-cars-2017-10

There is no excuse for Uber's car to not be able to handle such a straightforward/common accident scenario. To me, this is a homicide case. And to me, Tesla's fatal accident wasn't far behind this.
 
  • #119
russ_watters said:
I was thinking something similar (I mentioned it earlier), but let's not get conspiracy-theory-ish about this: cameras are inherently inferior to our eyes for this purpose (range of brightness), and proper automatic adjustment is difficult at best. That's why HDR photography was invented. So the quality might be poor, but that is not unusual. The video you linked looks to me like it is using a brightness-boosting/leveling technology.
I don't believe it's actually true that modern cameras are inferior to the human eye for dynamic range. Automatic adjustment when you have oncoming headlights mixed with darkness is a difficulty, however. Also, some kind of thresholding is probably necessary rather than a fault, since lots of shadow noise probably won't help your detection/tracking algorithms.

So my question is: are they using cameras for obstacle avoidance? Previously you mentioned LIDAR: was that speculation, or do you know it was using LIDAR?
More broadly, it's hard to speculate usefully without a very good idea of what the sensors were receiving at the time.
 
  • Like
Likes nsaspook
  • #120
sandy stone said:
Isaac Asimov, where are you in our time of need?
To be honest, I don't see any of this as a "need" - with the exception of #1, which is a political question, these are already legal realities. Robots don't change anything that would make these need to be said. And they won't until/unless they become legally recognized sentient AI.
 
  • #121
russ_watters said:
There is no excuse for Uber's car to not be able to handle such a straightforward/common accident scenario. To me, this is a homicide case. And to me, Tesla's fatal accident wasn't far behind this.

I completely agree this is a homicide case where we (the police and legal system) decide responsibility for the death.

Is there evidence that your driving directly contributed to the death of the individual, and was the standard of your driving below, or well below, what is expected of a reasonable driver in those circumstances? Evidence suggesting the pedestrian stepped, moved, or was detected in front of you in conditions leaving little or no chance to avoid or stop must be given great weight in deciding who to blame.

A similar case with no automation.
http://www.daily-chronicle.com/2018/03/04/genoa-pedestrian-39-hit-and-killed-on-route-72/amriqy8/
Smith said his department was called about 7:35 p.m. to the scene of the crash, where officers determined Michael J. Price, of the 800 block of Wilshire Drive, was crossing the road when Cindy P. Napiorkowski, 38, also of Genoa, was driving west and couldn't stop before hitting Price.

Smith said two members of the DeKalb County Coroner's Office were at the scene, where the man was pronounced dead.

Smith said the man was not crossing at an intersection.

"It isn't very well-lit where he was crossing," Smith said.

He said no citations were issued, the weather was clear, and no foul play is suspected.
 
  • #122
olivermsun said:
I don't believe it's actually true that modern cameras are inferior to the human eye for dynamic range.
I'm not trying to be condescending here, but do you do much photography beyond just basic snapping? It's a huge and well known problem. Take a photo outside of something backlit - you'll notice the subject is near totally black in the photo yet you have no trouble seeing them yourself. It's why people use flashes and professionals use massive reflectors to illuminate their subjects outside -- and that's during the daytime!
Automatic adjustment when you have oncoming headlights mixed with darkness is a difficulty, however.
Yes, and in this case, the AI has to decide which it wants to see well: the brightly lit area near the car or the darker area further away and adjust accordingly.

Note though that this part of our discussion probably doesn't matter much. As far as we know, this was a dashcam, and not part of the car's self-driving system.
 
  • Like
Likes nsaspook
  • #123
Ryan_m_b said:
Not only did the vehicle have LIDAR and Radar but the maker of the LIDAR system has come out and said they can't understand how she wouldn't have been detected by it.
Ok, I know that, but what I asked is whether you know it was using the LIDAR. I'll take that as a no. I'm suggesting that because this seems like an easy accident to avoid for LIDAR, perhaps the car wasn't using it but was rather using/testing another system.
 
  • #124
russ_watters said:
I'm not trying to be condescending here, but do you do much photography beyond just basic snapping? It's a huge and well known problem.
No offense, but you are being a little condescending here. I've been a reasonably serious photographer for a few decades now, including paid work for some years. I'm professionally interested in sensor design and data processing, so if you have some better sources I am all ears (eyes?).

Take a photo outside of something backlit - you'll notice the subject is near totally black in the photo yet you have no trouble seeing them yourself. It's why people use flashes and professionals use massive reflectors to illuminate their subjects outside -- and that's during the daytime!
A big part of that problem is metering. Use a modern digital sensor and "develop" the raw data correctly, and you will realize that you have more dynamic range available than your output medium is capable of displaying. Hence the old -1.7 stop fill flash trick is much less a "thing" now than it was in the slide film days. :wink:
 
  • #125
nsaspook said:
From the little information I've been able to find, the safety driver was looking down at the object-detection display (which combined all sensors, including LiDAR) on a laptop or similar computer, as it was her job to monitor the system.
("his") Yes, that's what I was getting at. I'll add a caveat to my previous statement though: the law may dictate that the person in the drivers' seat is legally responsible. Otherwise, it's Uber who set him up with a task to complete that took his attention away from being a true safety back-up system.
Her timing as a detection target while crossing with the bike, relative to the background and the car's angle of approach, might have reduced her unique human signature as she blended with a bike that had various sizes of plastic bags strung over it. I wonder how many pictures of homeless people walking laden bikes are in the image databases used for high-confidence target classification?
I agree with @gleem that the job of the AI is to avoid collisions with objects and it doesn't need that level of identification to do so. I don't know if a floating grocery bag is visible on radar, but anything bigger needs to be avoided.
Classification of objects as a boulder or a moose matters beyond mere detection, because the response should be different for 'benign' objects seen by the detection system. Executing an emergency stop for every object detected is dangerous too.
Huh? Can you name an object besides a floating grocery bag or newspaper that the car should choose to hit instead of avoiding? If I'm approaching an object that's got a cross section of 12 square feet, I'm stopping no matter what it is!
 
  • #126
https://jalopnik.com/lidar-maker-velodyne-blame-to-uber-in-fatal-self-drivin-1824027977

She said that lidar has no problems seeing in the dark. “However, it is up to the rest of the system to interpret and use the data to make decisions. We do not know how the Uber system of decision-making works,” she added.

Recognizing pedestrians continues to be a challenge for autonomous technology, which will be part of the focus of the investigation. Thoma Hall suggested that those answers will be found at Uber, not Velodyne.
...
That jibes with comments from experts who study autonomous cars. Earlier this week, University of South Carolina law professor Bryant Walker Smith told Jalopnik that Uber’s equipment “absolutely” should’ve detected Herzberg on the road.

The issue, he said, is that the tech quite likely “classified her as something other than a stationary object.”
 
  • #127
nsaspook said:
Is there evidence that your driving directly contributed to the death of the individual, and was the standard of your driving below, or well below, what is expected of a reasonable driver in those circumstances? Evidence suggesting the pedestrian stepped, moved, or was detected in front of you in conditions leaving little or no chance to avoid or stop must be given great weight in deciding who to blame.
Agreed, but I think in this case investigators will say that both the "safety driver" and car should have easily been able to avoid this collision, placing the blame heavily on them (Uber).
A similar case with no automation.
http://www.daily-chronicle.com/2018/03/04/genoa-pedestrian-39-hit-and-killed-on-route-72/amriqy8/
Similar, yes, but there is not enough information there for us to say how similar.
 
  • #128
nsaspook said:
The issue, he said, is that the tech quite likely “classified her as something other than a stationary object.”
Well, software which classifies a pedestrian in the same category as a windblown plastic bag needs something improved, all right.
 
  • Like
Likes HAYAO, russ_watters and nsaspook
  • #129
olivermsun said:
No offense, but you are being a little condescending here. I've been a reasonably serious photographer for a few decades now, including paid work for some years. I'm professionally interested in sensor design and data processing, so if you have some better sources I am all ears (eyes?).
I didn't know, so I asked - but that is shocking to me. Here's some sources:
http://clarkvision.com/imagedetail/eye-resolution.html
https://photo.stackexchange.com/que...-human-eye-compare-to-that-of-digital-cameras
https://www.cambridgeincolour.com/tutorials/cameras-vs-human-eye.htm#sensitivity
A big part of that problem is metering. Use a modern digital sensor and "develop" the raw data correctly, you will realize that you have more dynamic range available than your output medium is capable of displaying.
I'm aware of this, but it is highly unlikely a dashcam is shooting raw 14-bit (per channel) images. More likely 8-bit.

Please note: one of the big advantages of cell phones - which is probably what was used in that demonstration - is that they are little computers and as a result can do on-the-fly post processing that stand-alone cameras often can't (though they are getting better). I just took a few photos/videos with a quality point and shoot (Panasonic Lumix DMC-ZS50) and my Samsung Galaxy S8 and the difference is big:

[Attached images: Lumix.jpg and Galaxy.jpg, comparison frames from the two cameras]


The Lumix has the far superior camera and lens, but the Galaxy is clearly post-processing the video to boost the brightness (albeit making it noisy) before writing it to disk... and even then, the scene from the Galaxy is far inferior to what I could see with my eyes.

Given your experience, I suspect you are viewing this from a perspective of high-end equipment that doesn't match well with what we are dealing with here.
 

  • #130
russ_watters said:
I didn't know, so I asked - but that is shocking to me.
I'm confused. Which part is shocking to you?

I am not sure what I am to take away from those sources. We are talking about instantaneous dynamic range, I assume, not the dynamic range of the eye-brain system allowing for tens of minutes of accommodation. What do your sources say? What about actual studies of the human visual system?

I'm aware of this, but it is highly unlikely a dashcam is shooting raw 14-bit (per channel) images. More likely 8-bit.
If a video camera is being used for vehicle navigation/collision avoidance, it should probably be better than 8-bit. This is an experimental self-driving car that probably costs more than $150. I have no idea what is the source of the video we're being shown publicly, but the vehicle better have something more than an 8-bit backup camera sensor.

Please note: one of the big advantages of cell phones - which is probably what was used in that demonstration - is that they are little computers and as a result can do on-the-fly post processing that stand-alone cameras often can't (though they are getting better). I just took a few photos/videos with a quality point and shoot (Panasonic Lumix DMC-ZS50) and my Samsung Galaxy S8 and the difference is big:

[see the attached Lumix/Galaxy comparison images]

The Lumix has the far superior camera and lens, but the Galaxy is clearly post-processing the video to boost the brightness (albeit making it noisy) before writing it to disk... and even then, the scene from the Galaxy is far inferior to what I could see with my eyes.

Given your experience, I suspect you are viewing this from a perspective of high-end equipment that doesn't match well with what we are dealing with here.

Plenty of modern cameras offer on-camera dynamic range adjustment, compression, and even editing. We're not talking ultra-high-end equipment, but current < $500 system cameras.

But to return to my earlier point, I can't see why a self-driving car would have a less capable camera and image processor than a cell phone (which these days is pretty darned good) or a cheap system camera. That would seem to be a poor engineering decision, given the total costs of the system and the goal of the demonstration.
 
Last edited:
  • #131
russ_watters said:
I agree with @gleem that the job of the AI is to avoid collisions with objects and it doesn't need that level of identification to do so. I don't know if a floating grocery bag is visible on radar, but anything bigger needs to be avoided.

Huh? Can you name an object besides a floating grocery bag or newspaper that the car should choose to hit instead of avoiding? If I'm approaching an object that's got a cross section of 12 square feet, I'm stopping no matter what it is!

Don't assume a Lidar computer vision system is similar to what the human eye or most video cameras see.
The problem when using abstract sensors like Lidar is that you often have large possible false-detection areas that must be filtered or classified using feature extraction with a neural-network-like classification system.

See 8:00 to 12:00 in the video for examples.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3230992/
A camera image samples the intensity of a scene at roughly uniform angular intervals. Individual pixels have no notion of range (and therefore of the shape of the surface they represent), but the intensity of the pixels is assumed to be approximately invariant to viewpoint and/or range. As a consequence, the appearance of a feature is reasonably well described by a set of pixel values.

LIDARs also sample the scene at uniform angular intervals, but each sample corresponds to a range measurement. Critically, unlike cameras, the value of each “range pixel” is profoundly affected by the position and orientation of the sensor. As a result, it becomes non-trivial to determine whether two features encoded as a set of <angle, range> tuples match.

Because of these fundamental differences between cameras and LIDARs, there are some challenges if we want to extract features from LIDAR data using extractors from the computer vision field.
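A minimal toy version of what the paper describes, to show why LIDAR returns aren't just "pixels" (my own sketch, not from the paper): convert the <angle, range> returns to Cartesian points and group nearby points into candidate objects; the same object produces a different set of points from every viewpoint.

```python
import math

# Toy 2-D LIDAR processing: polar returns -> Cartesian points -> crude clusters.
def scan_to_points(angles_deg, ranges_m):
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a, r in zip(angles_deg, ranges_m)]

def cluster(points, gap_m=0.5):
    """Group consecutive points whose spacing is under gap_m metres."""
    clusters, current = [], [points[0]]
    for prev, pt in zip(points, points[1:]):
        if math.dist(prev, pt) <= gap_m:
            current.append(pt)
        else:
            clusters.append(current)
            current = [pt]
    clusters.append(current)
    return clusters

points = scan_to_points([0, 1, 2, 10, 11], [20.0, 20.1, 19.9, 5.0, 5.1])
print(len(cluster(points)))   # 2 candidate objects: a far surface and a near one
```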
 
Last edited:
  • Like
Likes gleem, OmCheeto and olivermsun
  • #133
olivermsun said:
I'm confused. Which part is shocking to you?
That you haven't noticed the dynamic range problem that digital cameras have vs our eyes.
I am not sure what I am to take away from those sources. We are talking about instantaneous dynamic range, I assume, not the dynamic range of the eye-brain system allowing for tens of minutes of accommodation.
Agreed. I'm not seeing a problem in the sources.
What do your sources say?
From the third link:
Dynamic range* is one area where the eye is often seen as having a huge advantage. If we were to consider situations where our pupil opens and closes for different brightness regions, then yes, our eyes far surpass the capabilities of a single camera image (and can have a range exceeding 24 f-stops). However, in such situations our eye is dynamically adjusting like a video camera, so this arguably isn't a fair comparison.

[Illustrations from the article: "Eye Focuses on Background", "Eye Focuses on Foreground", "Our Mental Image"]
If we were to instead consider our eye's instantaneous dynamic range (where our pupil opening is unchanged), then cameras fare much better. This would be similar to looking at one region within a scene, letting our eyes adjust, and not looking anywhere else. In that case, most estimate that our eyes can see anywhere from 10-14 f-stops of dynamic range, which definitely surpasses most compact cameras (5-7 stops), but is surprisingly similar to that of digital SLR cameras (8-11 stops).

On the other hand, our eye's dynamic range also depends on brightness and subject contrast, so the above only applies to typical daylight conditions. With low-light star viewing our eyes can approach an even higher instantaneous dynamic range, for example.
If the "arguably is not a fair comparison" part is what you are referring to, there are two things to take away:
1. Fair or not, it is real. Our eyes rapidly adjust and enable us to see a vastly higher dynamic range than video cameras. Perhaps the video camera could be programmed to take different exposures and combine them or post-process and boost the dark areas like my cell phone did, but clearly the video in question didn't.
2. Even including the "not a fair comparison" part, the eye is still much better than the camera, rather than vastly better.
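For a feel for what those f-stop numbers mean in linear terms (each stop is a factor of two; I'm using roughly the midpoints of the ranges quoted above):

```python
# Convert dynamic range in f-stops to a contrast ratio: ratio = 2**stops.
for label, stops in [("compact camera", 6), ("DSLR", 10),
                     ("eye, instantaneous", 12), ("eye, fully adapted", 24)]:
    print(f"{label}: ~{2 ** stops:,}:1")
# compact camera: ~64:1 ... eye, fully adapted: ~16,777,216:1
```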
What about actual studies of the human visual system?
To be honest, I see this as such a mundane, everyday issue that it hadn't occurred to me to look for scientific studies of it. I'm not even sure what one would try to say with a study, since it might end up looking like a product comparison against a specific camera. I don't think that's a subject for scientific research and a little bit of googling came up empty. But if you have any sources that you think are more relevant, I'd be happy to read them.
If a video camera is being used for vehicle navigation/collision avoidance, it should probably be better than 8-bit.
Agreed. Please note, the quote of mine you responded to was discussing the dashcam footage. I would hope the navigation/collision-avoidance cameras are better.
This is an experimental self-driving car that probably costs more than $150. I have no idea what is the source of the video we're being shown publicly, but the vehicle better have something more than an 8-bit backup camera sensor.
It says "dashcam", so I assume it is a commercially available dash cam. No doubt it cost more than $150 (not sure where that number comes from), but I highly doubt even higher-end dash cams record in anything higher than normal HD, at 10bits color depth (and compressed). The file sizes would be unwieldy.
Plenty of modern DSLRs offer on-camera dynamic range adjustment, compression, and even editing.
Ok. Clearly, the dashcam used in the video we are discussing was not a DSLR and didn't do that sort of processing.
 

  • #134
russ_watters said:
That you haven't noticed the dynamic range problem that digital cameras have vs our eyes.
Russ, you seem to be very assertive/aggressive even on topics where you are not well informed. This isn't the first thread where we've seen this.

russ_watters said:
That you haven't noticed the dynamic range problem that digital cameras have vs our eyes.
I have noticed that problem and followed it from the slide-film era up to modern digital sensors. We've gone from roughly 7 stops to 15 in usable dynamic range. The problem today is representing a scene in a perceptually "correct" way even when the display medium cannot match that range. Furthermore, you are talking about a video camera, with stacks of frames to work with, so HDR processing is completely possible.
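A minimal sketch of the kind of multi-frame fusion I mean (toy weighting, not a production HDR pipeline): blend consecutive short and long exposures, favouring whichever frame has each region well exposed.

```python
import numpy as np

# Toy exposure fusion: weight each frame by how close its pixels sit to mid-grey.
def fuse(frames):
    """frames: list of same-shape float arrays scaled 0..1 (bracketed exposures)."""
    stack = np.stack(frames)                           # (n_frames, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / 0.08)     # well-exposed pixels weigh more
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

dark = np.full((2, 2), 0.05)     # shadows crushed in the short exposure
bright = np.full((2, 2), 0.55)   # same region readable in the longer exposure
print(fuse([dark, bright]).round(2))   # ~0.51 everywhere: pulled toward the readable frame
```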

Agreed. I'm not seeing a problem in the sources.
Seemingly you did not read them honestly and with a critical mind.

From the third link:

If the "arguably is not a fair comparison" part is what you are referring to, there are two things to take away:
1. Fair or not, it is real. Our eyes rapidly adjust and enable us to see a vastly higher dynamic range than video cameras. Perhaps the video camera could be programmed to take different exposures and combine them or post-process and boost the dark areas like my cell phone did, but clearly the video in question didn't.
The linked source says: "If we were to instead consider our eye's instantaneous dynamic range (where our pupil opening is unchanged), then cameras fare much better. This would be similar to looking at one region within a scene, letting our eyes adjust, and not looking anywhere else. In that case, most estimate that our eyes can see anywhere from 10-14 f-stops of dynamic range, which definitely surpasses most compact cameras (5-7 stops), but is surprisingly similar to that of digital SLR cameras (8-11 stops)."

The dynamic range needed to process the scene is a near-instantaneous dynamic range, not a range with allowance for adjustment of the human visual system over minutes or tens of minutes.

The video camera's metering would be expected to adjust over a period of seconds, not minutes. Of course the video camera should process the image to return a reasonable "brightness." What the heck do you think an autometering/autoexposure system does?

The dynamic range that is being quoted is also outdated.

2. Even including the "not a fair comparison" part, the eye is still much better than the camera, rather than vastly better.
Again, selective and dishonest reading of the sources.

To be honest, I see this as such a mundane, everyday issue that it hadn't occurred to me to look for scientific studies of it. I'm not even sure what one would try to say with a study, since it might end up looking like a product comparison against a specific camera. I don't think that's a subject for scientific research and a little bit of googling came up empty. But if you have any sources that you think are more relevant, I'd be happy to read them.
Yes, why the heck on a science forum would one look up scientific evidence before posting an opinion?
 
Last edited:
  • #135
olivermsun said:
Russ you seem very ignorant but assert

Agreed. I'm not seeing a problem in the sources.
Your reply includes a broken double-quote, and the above is all the content I see from you. Perhaps you are editing it, but anyway, if you have an explanation of your position to offer, I'm all ears (eyes).
 
  • #137
nsaspook said:
Don't assume a Lidar computer vision system is similar to what the human eye or most video cameras see.
I'm not - I didn't even mention LIDAR in this part of the discussion. This is about what the human driver saw (should have seen) vs the dashcam footage. When people initially saw the dashcam footage, they concluded from it - incorrectly - that it showed that the human driver would not have been able to see the pedestrian. And I'm explaining why the dashcam footage is so poor and why videos that other people have uploaded show a brighter scene.
 
  • Like
Likes Ryan_m_b and nsaspook
  • #139
I'll re-reply to add:
nsaspook said:
See 8:00 to 12:00 in the video for examples.
I just watched from about 7:00 to 13:00 in the video and it's very interesting. Much of it I've seen before, but not to that level of detail. It's amazing to me that the cars can do as well as they do in these extremely complicated situations. But this accident was not a complicated situation at all -- indeed, the scenario presented at 12:15 was far more complex (albeit lower speed) than the one we're discussing. I did note that he showed the car braking to avoid a flying bird -- not a choice I would make, and I wonder how it would react to a plastic bag.
 
  • Like
Likes nsaspook
  • #140
nsaspook said:
I see it as a positive. Maybe they will stop jaywalking like maniacs after this unfortunate incident.
...
I don't have a problem with Jaywalkers. I do have a problem with people who don't look both ways before crossing the street.

Videos taken from the inside of our local 50 TON commuter trains:

 
  • Like
Likes russ_watters
