First fatal accident involving a car in self-driving mode

In summary, a Tesla operating in self-driving (Autopilot) mode was involved in a fatal crash. The car's software was programmed to ignore objects that only have components "in the air," with no connection to the ground (e.g., overhead road signs).
  • #36
"Major accident" is again a term based in human common sense. There is no major accident sensor. Structural integrity sensor grids are impractical in any kind of complex machine in mass public use. Accelerometers are practical, which the vehicle must have. So the vehicle encountered something like, I dunno, a 5g or 10g rearward spike for a few milliseconds and post event it is still moving otherwise unimpaired as far as the sensors show; the road ahead is now perfectly clear. What should the software do? Emergency stop, regardless of road conditions and traffic? Maneuver off the road, close to the ditch?

Suppose that sensor signal came instead from unavoidably hitting a deer on a busy highway with traffic moving at 75 mph? Me, I pull off the road as soon as I judge the shoulder is safe enough; I don't make an emergency stop on the highway.
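A minimal sketch of the kind of post-impact decision logic being weighed in this post (purely illustrative; the threshold values, the state fields, and the pull-over-when-safe policy are assumptions, not anything Tesla has documented):

```python
from dataclasses import dataclass

# Illustrative thresholds only; a real system would fuse many sensors
# and use validated values, not these guesses.
SPIKE_G_THRESHOLD = 5.0       # rearward deceleration spike, in g
SPIKE_MIN_DURATION_S = 0.005  # spike must last at least a few milliseconds

@dataclass
class VehicleState:
    spike_g: float           # peak longitudinal deceleration seen recently, in g
    spike_duration_s: float  # how long the spike lasted
    controls_ok: bool        # steering/brakes/propulsion still respond normally
    path_clear: bool         # forward sensors report a clear road
    shoulder_safe: bool      # a safe shoulder/stopping spot is available

def post_impact_action(s: VehicleState) -> str:
    """Decide what to do after a suspected impact, per the trade-off above:
    avoid an emergency stop in live traffic unless the car can no longer be
    driven safely; otherwise pull off the road when it is safe to do so."""
    impact_suspected = (s.spike_g >= SPIKE_G_THRESHOLD
                        and s.spike_duration_s >= SPIKE_MIN_DURATION_S)
    if not impact_suspected:
        return "continue"
    if not s.controls_ok or not s.path_clear:
        return "emergency_stop"          # can't keep driving safely
    if s.shoulder_safe:
        return "pull_over"               # leave the travel lane when safe
    return "slow_and_seek_shoulder"      # keep moving until a safe spot appears

# Example: a brief 7 g spike, car still controllable, no safe shoulder yet
print(post_impact_action(VehicleState(7.0, 0.008, True, True, False)))
# -> slow_and_seek_shoulder
```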
 
  • #37
[Image: tesla-truck-accident-31.jpg]

The Florida Highway Patrol described the events:

When the truck made a left turn onto NE 140th Court in front of the car, the car’s roof struck the underside of the trailer as it passed under the trailer. The car continued to travel east on U.S. 27A until it left the roadway on the south shoulder and struck a fence. The car smashed through two fences and struck a power pole. The car rotated counter-clockwise while sliding to its final resting place about 100 feet south of the highway.

Here's our bird's-eye visualization of what happened, based on the information released by the police:
http://electrek.co/2016/07/01/understanding-fatal-tesla-accident-autopilot-nhtsa-probe/
 
  • #38
mheslep said:
"Major accident" is again a term based in human common sense. There is no major accident sensor. Structural integrity sensor grids are impractical in any kind of complex machine in mass public use. Accelerometers are practical, which the vehicle must have. So the vehicle encountered something like, I dunno, a 5g or 10g rearward spike for a few milliseconds and post event it is still moving otherwise unimpaired as far as the sensors show; the road ahead is now perfectly clear. What should the software do? Emergency stop, regardless of road conditions and traffic? Maneuver off the road, close to the ditch?

A structural integrity sensor could be something as simple as a wire current loop in the roof supports to detect if it's missing. Yes, an emergency stop is a good option if the other option is to continue blindly into a house, a building, or a group of people. It's just dumb luck that there was an open field in this case.
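As a rough illustration of that trip-wire idea (hypothetical; a production interlock would be a hard-wired circuit monitored by the body controller, not application code like this):

```python
# Hypothetical sketch of a continuity-loop interlock: a small current is
# driven through a wire routed along the roof pillars; if the roof is torn
# away, the loop opens and the fault latches until the car is inspected.

class ContinuityLoop:
    def __init__(self, read_loop_closed):
        # read_loop_closed: callable returning True while the loop conducts
        self._read = read_loop_closed
        self.fault_latched = False

    def poll(self) -> bool:
        """Return True if a structural fault is (or was ever) detected."""
        if not self._read():
            self.fault_latched = True   # latch: a broken loop never "heals"
        return self.fault_latched

# Example with a fake sensor that "breaks" after three reads
reads = iter([True, True, True, False, False])
roof_loop = ContinuityLoop(lambda: next(reads, False))
for _ in range(5):
    print(roof_loop.poll())   # False, False, False, True, True
```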
 
  • #39
nsaspook said:
A structural integrity sensor could be something as simple as a wire current loop in the roof supports to detect if it's missing.
And in every structural piece of the vehicle, with wiring running across all structural components, all returning to some hub. A break in any part of it would need to trigger, what, an emergency stop, at any point over the life of the vehicle? It's not done because it's not remotely practical.
 
  • #40
mheslep said:
And in every structural piece of the vehicle, with wiring running across all structural components, all returning to some hub. A break in any part of it would need to trigger, what, an emergency stop, at any point over the life of the vehicle? It's not done because it's not remotely practical.

I agree it would be impractical to cover the entire car, but the roof is a part of the car that can be completely destroyed (a missing roof after a car runs under a truck is not uncommon) while leaving most of the car's important functions operational, so it would be practical to sense it with simple trip-wire interlocks.
 
  • #41
mheslep said:
That's profoundly myopic, akin to saying there's not much difference between the latest clever soccer-playing robot and human beings, because they both can make a basic play plan and kick the ball. I say profoundly, because navigating the world and dealing with others out there on the road is not just a bit more complex than kicking a ball but vastly more complex, and so are the consequences of getting it wrong.

From the point of view of assessing risk there is no difference. What do you know about the state of AI with regard to automated cars? Nothing, I'm sure, but the developers of these systems and the companies investing millions of dollars in R&D to develop this technology seem confident that it is a solvable problem, and I'm willing to believe people who put their money where their mouth is.

What does the complexity of the problem have to do with risk assessment? At the end of the day, if the statistics show that automated cars do at least as well as human drivers, then from a public policy point of view, automated cars shouldn't really be seen as much different from public or private transportation - one entity operates a vehicle as a service for another. You seem to be of the opinion that if an automated car makes a mistake and kills someone, that is somehow worse than if a human makes the same mistake and kills someone. This is pure BS and just fear of something new. You're used to the idea of humans messing up and killing people, so you're OK with it. Robots making a mistake and killing people is scary and horrible - even if at the end of the day far fewer people will die in car accidents.
 
  • #42
mheslep said:
If autonomous were the same as cruise control it would be no better than cruise control. Autonomous means ... autonomous, i.e. to "act independently", and not with supervision, by definition.
Does Tesla call its autopilot "autonomous"? But I don't want to argue about semantics. You asked where the advantage of autopilot is if you still have to watch traffic. And the answer is: the same as for cruise control, just better. It is more convenient.

Every kitchen knife is a potentially deadly weapon - you can kill yourself (or others) if you use it in the wrong way. There are deadly accidents with knives all the time. Why do we allow selling and using kitchen knives? Because they can be used properly for useful things.

Tesla's autopilot is not a new knife; it is a feature for the knife that can improve safety - if you use it properly. If you use it in a completely wrong way, you still get the same accident rate as before.

nsaspook said:
An emergency stop is a good option if the other option is to continue blindly into a house, a building, or a group of people. It's just dumb luck that there was an open field in this case.
Was the car blind? I would expect that it would do an emergency stop in that case, but where do you see indications of the car being blind to the environment?
 
  • #43
mfb said:
Was the car blind? I would expect that it would do an emergency stop in that case, but where do you see indications of the car being blind to the environment?

When it veered off the lane and traveled across a field, hitting two fences and a pole.
 
  • #44
One would expect that, just as airbags can be deployed by a sensor that reads high deceleration, the autopilot should be designed to sense any impact of sufficient magnitude.

If the autopilot senses g-forces that exceed some limit (such as part of the car impacting a tractor trailer, or even a guard rail) it should slow to a stop just to be on the safe side.
 
  • #45
dipole said:
From the point of view of assessing risk there is no difference
Yes there is, a substantial difference. The raw accident rate is not the only metric of significance. The places and kinds of accidents would change, the share of accidents involving children, pedestrians, and the handicapped would change, and the share of accidents involving mass casualties would change.

dipole said:
... What do you know about the state of AI with regard to automated cars? Nothing, I'm sure, ...
Quite a bit; I worked on the vision system of one of the first vehicles that might be called successful. To realize these vehicles are sophisticated *and* utterly oblivious machines, I think one should ride in one along a dirt road, watch the software handle the vehicle with preternatural precision for hundreds of miles, and then all of a sudden have it head off the road at speed toward a deep gully to avoid a suddenly appearing dust devil.
 
  • #46
mheslep said:
Quite a bit; I worked on the vision system of one of the first vehicles that might be called successful. To realize these vehicles are sophisticated *and* utterly oblivious machines, I think one should ride in one along a dirt road, watch the software handle the vehicle with preternatural precision for hundreds of miles, and then all of a sudden have it head off the road at speed toward a deep gully to avoid a suddenly appearing dust devil.

I wouldn't say you worked on one that was successful. ;)
 
  • #47
DaveC426913 said:
One would expect that, just like airbags can be deployed by a sensor that reads high deceleration, the autpilot should be designed to sense any impact of sufficient magnitude.
Either the g magnitude or the duration of the shock was insufficient to trip the air bags in this crash, i.e. 7 g's. Would you come to a stop on a major highway, traffic doing 75 mph, if you had hit, say, a deer that was then thrown clear of the vehicle, or continue until you could maneuver off the road?

Trying to come up with Yet Another Simple Sensor Rule might make the vehicle a bit safer, or not, but it is nothing like the scene understanding that people possess, the holy grail of AI. Declaring the vehicle safe based on sensor limits is a mistake.
 
  • #48
mheslep said:
Either the g magnitude or the duration of the shock was insufficient to trip the air bags in this crash,
Huh. Did not know that.
 
  • #49
mheslep said:
Would you come to a stop on a major highway, traffic doing 75 mph, if you had hit, say, a deer that was then thrown clear of the vehicle, or continue until you could maneuver off the road?

I've seen people do both in similar situations.

mheslep said:
Trying to come up with Yet Another Simple Sensor Rule might make the vehicle a bit safer, or not, but it is nothing like the scene understanding that people possess, the holy grail of AI. Declaring the vehicle safe based on sensor limits is a mistake.

Sure, but how much do people understand the scene? It is not just a question of what humans can do, but what they actually do.

In principle you could reduce this to absurdity. For example, should the car recognize that the tire on the adjacent car is low on pressure and dangerous? Also, which is more likely to have an advantage during an emergency, a planned strategy based on the physics of the car, or a panic reaction?
 
  • #51
russ_watters said:
I agree and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's choices they would make. We don't know what they can and can't (a truck!) see.
How do you know when an object in the field-of-view has a signal-to-noise ratio too low to be detected?

It isn't just Tesla's autopilot that has that problem. Many accidents have happened when human drivers had their vision momentarily interrupted by reflection or by glare caused by scattering or refraction.
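As a toy illustration of that question for a single camera frame (illustrative only; the 3-sigma threshold and the single contrast test are assumptions, and a real perception stack does far more than this):

```python
import numpy as np

def contrast_snr(object_pixels: np.ndarray, background_pixels: np.ndarray) -> float:
    """Crude contrast-to-noise ratio: how far the object's mean brightness
    sits from the background mean, in units of the background noise."""
    noise = background_pixels.std()
    if noise == 0:
        return float("inf")
    return abs(object_pixels.mean() - background_pixels.mean()) / noise

# A white trailer side against a bright, hazy sky: both regions are near
# saturation, so the ratio collapses even though the object is huge.
rng = np.random.default_rng(0)
sky        = rng.normal(240, 5, size=10_000).clip(0, 255)
trailer    = rng.normal(243, 5, size=10_000).clip(0, 255)
dark_truck = rng.normal(60, 5, size=10_000).clip(0, 255)

DETECTION_THRESHOLD = 3.0   # assumed; below this a detector is mostly guessing
print(contrast_snr(trailer, sky))     # ~0.6 -> effectively invisible to this test
print(contrast_snr(dark_truck, sky))  # ~36  -> easily detected
```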
 
  • #52
russ_watters said:
I agree and I'm utterly shocked that autonomous cars have been allowed with very little government oversight so far. We don't know how they are programmed to respond to certain dangers or what Sophie's choices they would make. We don't know what they can and can't (a truck!) see.
The same can be said about human-driven cars, word for word.
There were 29,989 fatal motor vehicle crashes in the United States in 2014 in which 32,675 deaths occurred.
This resulted in 10.2 deaths per 100,000 people and 1.08 deaths per 100 million vehicle miles traveled.
http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/state-by-state-overview
 
  • #53
mfb said:
Why?
How exactly do you expect having test data to make the software or simulations worse?
[emphasis added]
Huh? Was that a typo? Your previous question was "For any given point in time, do you expect anything to be better if we delay implementation of the technology?" How did "better" become "worse"?

I'm advocating more data/testing before release, not less. Better data and better simulations, resulting in safer cars. I can't imagine how my meaning could not be clear, but let me describe in some detail how I think the process should go. Some of this has probably happened, most not. First a little background on the problem. The car's control system must contain at least the following elements:
A. Sensor hardware to detect the car's surroundings.
B. Control output hardware to actually steer the car, depress the brakes, etc.
C. A computer for processing the needed logic:
D. Sensor interpretation logic to translate the sensor inputs into a real picture of the car's surroundings.
E. Control output logic to make the car follow the path it is placed on.
F. Decision-making logic to determine what to do when something doesn't go right.

Here's the development-to-release timeline that I think should be taking place:
1. Tesla builds and starts selling a car, the Model S, with all the hardware they think is needed for autonomous control. This actually happened in October of 2014.

2. Tesla gathers data from real-world driving of the car. The human is driving, the sensors just passively collecting data.

3. Tesla uses the data collected to write software for the various control components and create simulations to test the control logic.

4. Tesla installs the software to function passively in the cars. By this I mean the car's computer does everything but send the output to the steering/throttle/brake. The car records the data and compares the person's driving to the computer's simulation of the driving. This would flag major differences between behaviors so the software could be refined, and point to scenarios that might need to be worked out in simulation (a minimal sketch of this kind of shadow-mode comparison follows the list).

5. Tesla deploys a beta test of the system using a fleet of trained and paid test "pilots" of the cars, similar to how Google had an employee behind the wheel of their Street View autonomous cars. These drivers would have training on the functional specs of the car and its behaviors -- but most of all, how to behave if they think the car may be malfunctioning (don't play "chicken" with it).

6. Tesla makes the necessary hardware and software revisions to finalize the car/software.

7. Tesla produces a report of the beta program's results and the functional specifications of the autopilot, and provides a few test cars to the Insurance Institute and NHTSA for their approval.

8. The autopilot is enabled (this actually happened in October of 2015).
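A minimal sketch of the shadow-mode comparison described in step 4 (hypothetical field names and thresholds; it simply logs timestamps where the passive autopilot's commands diverge sharply from what the human driver actually did):

```python
# Hypothetical shadow-mode logger: the autopilot computes steering/brake
# commands but never actuates them; large disagreements with the human
# driver are flagged for later simulation and software refinement.

STEER_DIVERGENCE_DEG = 10.0   # assumed threshold for a "major difference"
BRAKE_DIVERGENCE = 0.3        # brake pedal fraction, 0..1

def flag_divergences(log):
    """log: iterable of (timestamp, human_steer_deg, ai_steer_deg,
                         human_brake, ai_brake) tuples."""
    flagged = []
    for t, h_steer, ai_steer, h_brake, ai_brake in log:
        if (abs(h_steer - ai_steer) > STEER_DIVERGENCE_DEG
                or abs(h_brake - ai_brake) > BRAKE_DIVERGENCE):
            flagged.append(t)
    return flagged

# Example: at t=2.0 the human brakes hard while the shadow autopilot would not.
drive_log = [
    (0.0,  1.0,  2.0, 0.0, 0.0),
    (1.0, -3.0, -2.0, 0.1, 0.1),
    (2.0,  0.0,  1.0, 0.8, 0.0),   # human brakes, autopilot would not have
]
print(flag_divergences(drive_log))   # [2.0]
```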

Each of these steps, IMO, should take 1-2 years (though some would overlap), and the total time from first test deployment to public release should be about a decade. Tesla actually enabled the feature about 1 year after the sensors started being installed in the cars, so it couldn't possibly have done much of anything with most of the steps, and we know for sure they made zero hardware revisions to the first cars with the capability (which doesn't mean they haven't since improved later cars' sensor suites). Since the software is upgraded "over the air", the cars have some communication ability with Tesla, but how much I don't know. Suffice it to say, though, the amount of data these cars generate and process would have to be in the gigabytes-to-terabytes-per-hour range. A truly massive data processing effort.

So again: Tesla has created and implemented the self-driving features really, really fast and using the public as guinea pigs. That, to me, is very irresponsible.
 
  • #54
eltodesukane said:
The same can be said about human-driven cars, word for word.
Yes, but human drivers have something Tesla's autopilot doesn't: a license. I'm suggesting, in effect, that the Tesla autopilot should have a license before being allowed to drive.
 
  • #55
mheslep said:
Of what use is a would-be autonomous vehicle which requires full time monitoring? I see a wink-wink subtext in the admonishment to drivers from Tesla.
Agreed. It is irresponsible (and in most cases illegal) to watch a movie while your autopilot is driving the car. But it is also irresponsible to enable a feature that can be so easily misused (because it is a self-contradictory feature).

Moreover, a product that is in "beta" testing is by definition not ready for public release. It is spectacularly irresponsible to release a life-safety-critical device into the public domain if it isn't ready. All of our nuts-and-bolts discussion of the issue is kinda moot, since Tesla readily admits the system is not ready to be released.
 
  • #56
mfb said:
Same as cruise control. You still have to pay attention, but you don't have to do the boring parts like adjusting your foot position by a few millimeters frequently.
The difference between "autonomous" and "cruise control" is pretty clear-cut: with cruise control, you must still pay attention because you are still performing some of the driving functions yourself, and the functions overlap, so performing one (steering) means watching the others in a way that makes you easily able to respond as needed (not running into the guy in front of you). And, more importantly, you know that's your responsibility.

Autonomous, on the other hand, is autonomous.

Or, another way, with more details: With cruise control and other driver assist features, the driver has no choice but to maintain control over the car. Taking a nap isn't an option. So they are still wholly responsible for if the car crashes or drives safely. With an "autonomous" vehicle, even one that has the self-contradictory requirement of having a person maintain the capability to take back control, the car must be capable of operating safely on its own. Why? Because the reaction time of a person who doesn't expect to have to take over control is inherently longer than a person who does expect to have to take over control. For example, if you have your cruise control on and see brake lights in front of you, you know it is still up to you to apply the brakes. If you have the car in autopilot and see brake lights, you assume the car will brake as necessary and so you don't make an attempt to apply the brakes. In a great many accident scenarios, this delay will make it impossible for the human to prevent the accident if the car doesn't do its job. In the accident that started this thread, people might assume that the driver was so engrossed in his movie that he never saw the truck, but it is also quite possible that he saw the truck and waited to see how his car would react.

So not only is it irresponsible to provide people with this self-contradictory feature, Tesla's demand that the people maintain the responsibility for the car's driving is an inherently impossible demand to fulfill.
 
  • #57
dipole said:
What do you know about the state of AI with regard to automated cars? Nothing, I'm sure...
I know one thing, per Tesla's own description of it: it isn't ready for public release.
...but the developers of these systems and the companies investing millions of dollars in R&D into developing this technology seem confident that it is a solvable problem, and I'm willing to believe people who put their money where their mouth is.
I'm all for them solving the problem before implementing it.

Imagine you're a Tesla software engineer. You're the guy who is collecting and dealing with all the bug reports. I wonder if one of them already had a "Can't see white objects against a bright but cloudy sky" bug on his list. How would you feel? How would the police feel about it if a known bug killed a person?
 
  • #58
russ_watters said:
mfb said:
Why?
How exactly do you expect having test data to make the software or simulations worse?
Huh? Was that a typo? Your previous question was "For any given point in time, do you expect anything to be better if we delay implementation of the technology?" How did "better" become "worse"?
It was not a typo. You seem to imply that not collecting data now (i.e., not having autopilot as an option) would improve the technology. In other words, collecting data would make things worse. And I wonder why you expect that.
russ_watters said:
I'm advocating more data/testing before release, not less. Better data and better simulations, resulting in safer cars.
Your suggestion leads to more traffic deaths, in every single calendar year. Delaying the introduction of projects like the autopilot delays the development process. At the time of the implementation, it will be safer if the implementation is done later, yes - but up to then you have tens of thousands of deaths per year in the US alone, most of them avoidable. The roads in 2017 with autopilot will (likely) have fewer accidents and deaths than the roads in 2017 would if autopilot did not exist. And the software is only getting better, while human drivers are not. In 2018 the software will be even better. How long do you want to delay implementation?
russ_watters said:
Yes, but human drivers have something Tesla's autopilot doesn't: a license. I'm suggesting, in effect, that the Tesla autopilot should have a license before being allowed to drive.
The autopilot would easily get the human driver's license if there were one limited to the roads the autopilot can handle.
russ_watters said:
For example, if you have your cruise control on and see brake lights in front of you, you know it is still up to you to apply the brakes.
I can make the same assumption with more modern cruise control software that automatically keeps a safe distance to the other car. And be wrong in exactly the same way.
russ_watters said:
So not only is it irresponsible to provide people with this self-contradictory feature, Tesla's demand that the people maintain the responsibility for the car's driving is an inherently impossible demand to fulfill.
It is inherently impossible to watch the street if you don't have to steer? Seriously? In that case I must not exist, because I can do that as a passenger, and every driving instructor does it as part of their job on a daily basis.
 
  • #59
How does one guard against the mentality of "Hey y'all, hold my beer and watch this!" that an autopilot invites?
Not build such overautomated machines,
or make them increasingly idiot-proof?
jim hardy said:
With the eye-scan technology we have, that autopilot could have been aware of where the driver was looking.

Increasing complexity is Mother Nature's way of "confusing our tongues".
 
  • #60
mfb said:
It was not a typo. You seem to imply that not collecting data now (i.e., not having autopilot as an option) would improve the technology. In other words, collecting data would make things worse. And I wonder why you expect that.
I don't understand. I'm very explicitly saying I want to collect more data, not less data and you are repeating back to me that I want to collect less data and not more data. I'm really not sure where to go from here other than to request that you paint for me a detailed picture of the scenario you are proposing (or you think is what Tesla did) and then I can tell you how it differs from what I proposed.
Your suggestion leads to more traffic deaths, in every single calendar year.
That's an assumption on your part and for the sake of progressing the discussion, I'll assume it's true, even though it may not be.

In a similar vein, the FDA stands in the way of new drug releases, also almost certainly costing lives (and certainly also saving lives). Why would they do such a despicable thing!? Because they require a new drug to be proven safe and effective before it is released to the public.

See, with Tesla you are assuming it will, on day 1, be better than human drivers. It may be or it may not be. We don't know, and they weren't required to prove it. But even if it were, are you really willing to extend that assumption to every car company? Are you willing to do human trials on an Isuzu or Pinto self-driving car?

Verifying that something is safe and effective (whether a drug or a self-driving car) is generally done before releasing it to the public because it is irresponsible to assume it will be good instead of verifying it is good.
Delaying the introduction of projects like the autopilot delays the development process.
It doesn't delay the development process; it extends the development process prior to release. I think maybe the difference between what you are describing and what I am describing is that you are suggesting that development can occur using human trials. I think that suggestion is disturbing. But yeah, you are right: if you skip development and release products that are unfinished, you can release products faster.
At the time of the implementation, it will be safer if the implementation is done later, yes - but up to then you have tens of thousands of deaths per year in the US alone, most of them avoidable.
Maybe. And maybe not. But either way, yes, that's how responsible product development works.
I can make the same assumption with more modern cruise control software that automatically keeps a safe distance to the other car. And be wrong in exactly the same way.
Agreed. So I'd like to know the development cycle and limitations of such systems too. But we're still two steps removed from the full autopilot there (1. the human knows there is a limit to the system's capabilities; 2. the human is still an inherently essential part of the driving system), though it is definitely a concern of mine.
It is inherently impossible to watch the street if you don't have to steer? Seriously?
Huh? I don't think you are reading my posts closely enough. I don't think what I was describing was that difficult, and your claim bears no relation to it. Here's what I said (it was in bold last time, too): Because the reaction time of a person who doesn't expect to have to take over control is inherently longer than a person who does expect to have to take over control.

An auto-brake feature (for example) can reliably override a human if the human makes a mistake. A human cannot reliably override a computer if the computer makes a mistake. That's why it is unreasonable to expect or demand that a person be able to override the computer if it messes up. There is nothing stopping them from trying, of course -- but they are very unlikely to reliably succeed.
 
  • #61
russ_watters said:
I don't understand. I'm very explicitly saying I want to collect more data, not less data and you are repeating back to me that I want to collect less data and not more data.
The difference is the time. You suggest having less actual driving data at the end of 2016, less data at the end of 2017, and so on.

You suggest having more test data at the time of the introduction. But that time is different for the two scenarios. That is the key point.

russ_watters said:
Along a similar vein, the FDA stands in the way of new drug releases, also almost certainly costing lives (and certainly also saving lives). Why would they do such a despicable thing!? Because they need the new drug to have proven it is safe and effective before releasing it to the public.
And how do you prove it? After laboratory and animal tests, you test it on progressively larger groups of humans who volunteer to test the new drug, first with healthy humans and then with those who have an application for the drug. That's exactly what Tesla did, where the lab/animal steps were replaced by test cars driving around somewhere on the company grounds.

russ_watters said:
See, with Tesla you are assuming it will, on day 1, be better than human drivers. It may be or it may not be. We don't know, and they weren't required to prove it. But even if it were, are you really willing to extend that assumption to every car company? Are you willing to do human trials on an Isuzu or Pinto self-driving car?
I personally would wait until those cars drove some hundreds of thousands of kilometers. But no matter how long I would wait, if the software requires the driver to pay attention, I would (a) do that and (b) even if I wouldn't, if the accident rate is not above that of human drivers, I wouldn't blame the company for releasing such a feature for tests.
russ_watters said:
But yeah, you are right: if you skip development and release products that are unfinished, you can release products faster.
When do you expect the development of self-driving cars to be finished? What does "finished" even mean? At a state where no one can imagine how to improve the software? Then we'll never get self-driving cars.

Airplane development is still ongoing, 100 years after the Wright brothers built the first proper airplane. Do we still have to wait to decrease accident rates further before we can use them? Yes, this is an exaggerated example, but the concept is the same. Early airplanes had high accident rates. But without those early airplanes we would never have our modern airplanes, with less than one deadly accident per million flights.

russ_watters said:
A human cannot reliably override a computer if the computer makes a mistake.
"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well. With some driving experience it's something you would not even have to think about when driving yourself.
 
  • #62
Jenab2 said:
How do you know when an object in the field-of-view has a signal-to-noise ratio too low to be detected?

It isn't just Tesla's autopilot that has that problem. Many accidents have happened when human drivers had their vision momentarily interrupted by reflection or by glare caused by scattering or refraction.
Humans don't entirely miss tractor-trailers in daylight and clear weather for several seconds. Nor, per reports, did the vehicle in this case; the problem was that the vehicle had no concept of what a truck is, mistaking the trailer's high ground clearance for a clear path.
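To make that failure mode concrete, here is a deliberately naive version of the "ignore purely overhead objects" rule summarized at the top of the thread (an assumption-laden sketch, not Tesla's actual logic), showing how a high-riding trailer side can land in the same bucket as an overhead sign:

```python
# Naive filter: treat any return whose lowest point is above the car's
# clearance envelope as an overhead object (sign, bridge) and ignore it.
# A trailer riding high on its wheels has a bottom edge well above the
# low scan line, so this crude rule lumps it in with overhead signs.

VEHICLE_CLEARANCE_M = 1.4   # assumed height below which returns "matter"

def is_obstacle(bottom_height_m: float, top_height_m: float) -> bool:
    """Return True if the detected object intrudes into the car's path.
    (The top height is irrelevant to this naive rule.)"""
    return bottom_height_m < VEHICLE_CLEARANCE_M

overhead_sign = (5.5, 6.5)   # (bottom, top) heights in meters
trailer_side  = (1.5, 4.0)   # bottom edge above the clearance threshold
passenger_car = (0.2, 1.5)

for name, obj in [("overhead sign", overhead_sign),
                  ("trailer side", trailer_side),
                  ("passenger car", passenger_car)]:
    print(name, "-> obstacle" if is_obstacle(*obj) else "-> ignored")
# overhead sign -> ignored
# trailer side  -> ignored   (the dangerous misclassification)
# passenger car -> obstacle
```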
 
  • #63
russ_watters said:
I know one thing, per Tesla's own description of it: it isn't ready for public release.

I'm all for them solving the problem before implementing it.
...
I'd not take their description of "beta" to necessarily mean they were admitting they intentionally released an unsafe vehicle. Rather, the unsafe observation comes about from the details of this accident. Tesla frequently ships new software out to its cars that's unrelated to autonomous driving, software that changes the likes of charging time and ride height; they sometimes call these releases beta, but that does not mean the releases are safety-critical.
 
  • #64
mfb said:
...

"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well. With some driving experience it's something you would not even have to think about when driving yourself.

How does singling out a case of "truck in front" change anything? Riding autonomously, one will routinely see dozens of obstacles requiring the brakes to avoid a collision: the red-light intersection, the downed tree, etc. Either one gets ahead of the software in every possible case, manually applying the brakes, or one never can without detailed knowledge of the software's corner cases. In the former case, routine manual override, of what use are autonomous vehicles?
 
  • #65
mfb said:
The difference is the time. You suggest having less actual driving data at the end of 2016, less data at the end of 2017, and so on.

You suggest having more test data at the time of the introduction. But that time is different for the two scenarios. That is the key point.
OK, sure -- more data before release, which means less data accumulated by any given calendar year. Yes: more real-world data at a given calendar year comes from releasing early, and less data before release comes from releasing early. I favor more data/testing before release and less "pre-release testing" (if that even has any meaning anymore) on the public.
And how do you prove it? After laboratory and animal tests, you test it on progressively larger groups of humans who volunteer to test the new drug, first with healthy humans and then with those who have an application for the drug. That's exactly what Tesla did, where the lab/animal steps were replaced by test cars driving around somewhere on the company grounds.
They did? When? How much? Where can I download the report and the application, and where can I see the government approval announcement?

Again: they are referring to the current release as a "beta test" and the "beta test" started a calendar year after the first cars with the sensors started to be sold. Drugs don't have "beta tests" that involve anyone who wants to try them and both cars and drugs take much, much longer than one year to develop and test.
I personally would wait until those cars drove some hundreds of thousands of kilometers.
[assuming Tesla did that, which I doubt...] You're not the CEO of Isuzu. What if he disagrees and thinks it should only take dozens of miles? Does he get to decide his car is ready for a public "alpha test"? How would we stop him from doing that?
But no matter how long I would wait, if the software requires the driver to pay attention, I would (a) do that...
Your reaction time is inferior to the computer's: you are incapable of paying enough attention to correct some mistakes made by the car. Also, the software doesn't require the driver to pay attention; the terms and conditions do. That's different from driver-assist features, where the software literally requires the driver to pay attention. I'll provide more details in my next post.
...and (b) even if I wouldn't, if the accident rate is not above that of human drivers, I wouldn't blame the company for releasing such a feature for tests.
Catch-22: you can't know that until after it happens if you choose not to regulate the introduction of the feature.
When do you expect the development of self-driving cars to be finished? What does "finished" even mean?
The short answer (for an unregulated industry) is that development is "finished" when a company feels confident enough in it that it no longer needs to describe it as a "beta test". The long answer is detailed in post #53.
At a state where no one can imagine how to improve the software? Then we'll never get self-driving cars.
Obviously, that never happens for anything.
Airplane development is still ongoing, 100 years after the Wright brothers built the first proper airplane. Do we still have to wait to decrease accident rates further before we can use them? Yes, this is an exaggerated example, but the concept is the same.
No, it isn't. Airplanes are certified by the FAA before they are allowed to fly passengers. The test process is exhaustive, and no airplane is ever put into service while described as a "beta test". The process takes about a decade (edit: the 787 took 8 years to develop, of which 2 years were from first flight to FAA certification, plus 2 prior years of ground testing).
"Oh **** there is a truck in front of me, I'll brake no matter what the car would do" should work quite well.
No it won't. The car is supposed to do the braking for you, so the behavior you described runs contrary to how the system is supposed to work.
With some driving experience it's something you would not even have to think about when driving yourself.
The particular driver this thread is discussing won't be getting additional experience to learn that. Hopefully some other people will.
 
  • #66
mheslep said:
I'd not take their description of "beta" to necessarily mean they were admitting they intentionally released an unsafe vehicle. Rather, the unsafe observation comes about from the details of this accident.
I didn't say Tesla admitted it was "unsafe"; I said they admitted it was not ready for public release. The dice roll comes from not knowing exactly how buggy the software is until it starts crashing on you. When that happens with a Windows beta release, it's not a big deal... but this crash killed someone.
 
  • #67
Regarding semi-autonomous vs fully autonomous driving features:

I downloaded an Audi A7 owners' manual here:
https://ownersmanuals2.com/get/audi-a7-sportback-s7-sportback-2016-owner-s-manual-65244

On page 87, it says "Adaptive cruise control will not make an emergency stop." This leaves primary responsibility firmly in the lap of the driver, though it does leave an obvious open question: what constitutes an "emergency stop"? Well, there is an indicator light for it and a section describing when "you must take control and brake", titled "Prompt for driver intervention." Presumably, the Tesla that crashed did not prompt the driver that his intervention was required.

The system also requires the driver to set the following distance and warns that the stopping time required increases with speed -- so more attention is required.
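A quick back-of-the-envelope on why stopping time and distance grow so quickly with speed (the 0.8 g braking level and 1.5 s reaction time are assumptions, not figures from the Audi manual):

```python
# Stopping distance = reaction distance + braking distance (v^2 / 2a).
G = 9.81
DECEL = 0.8 * G         # assumed firm braking on dry pavement, m/s^2
REACTION_TIME_S = 1.5   # assumed driver reaction time

def stopping_distance_m(speed_mph: float) -> float:
    v = speed_mph * 0.44704                     # mph -> m/s
    return v * REACTION_TIME_S + v**2 / (2 * DECEL)

for mph in (30, 55, 75):
    print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m")
# 30 mph: ~32 m, 55 mph: ~75 m, 75 mph: ~122 m -- the braking term
# grows with the square of speed, so highway speeds leave little margin.
```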

For "braking guard", it says: "The braking guard is an assist system and cannot prevent a collission by itself. The driver must always intervene."

Here's what AAA has to say:
In What Situations Doesn’t It Work?
ACC systems are not designed to respond to stationary or particularly small objects. Camera-based systems can be affected by the time of day and weather conditions, whereas radar-based systems can be obstructed by ice or snow. Surveys have shown that relatively few drivers are aware of these types of limitations, and may overestimate the system’s protective benefit. Some drivers also have difficulty telling when ACC (vs. standard cruise control) is active.
https://www.aaafoundation.org/adaptive-cruise-control

A wired article on the issue:
IT HAPPENED WHILE I was reviewing a European automaker’s flagship luxury sedan.

I was creeping along on Interstate 93 in Boston, testing the active cruise-control system and marveling at the car’s ability to bring itself to a stop every time traffic halted. Suddenly an overly aggressive driver tried muscling into traffic ahead of me.

Instead of stopping, however, my big-ticket sedan moved forward as if that jerk wasn’t even there. I slammed on the brakes — stopping just in time — and immediately shut off the active cruise control.

It was a stark reminder that even the best technology has limits and a good example of why drivers should always pay attention behind the wheel. But it got me thinking:

What would’ve happened if I hadn’t stopped?
http://www.wired.com/2011/06/active-safety-systems/

That article has a good discussion of the definitions (autonomous vs. semi-autonomous or assist functions) and the logic they bring to bear. It continues:
Their only purpose is to slam on the brakes or steer the car back into your lane if a collision is imminent. In other words, if you notice the systems at work, you’re doing it wrong. Put down the phone and drive.

That's a key consideration. No automaker wants to see drivers fiddling with radios and cell phones, lulled into a false sense of security and complacency thinking the car will bail them out of trouble.
That was exactly the problem with the accident in question: the person driving thought the car would avoid trouble because it was supposed to be capable of avoiding trouble. Contrast that with the "assist" features, which *might* help you avoid trouble that you are already trying to avoid yourself.

These features are wholly different from an autopilot, which is intended to be fully autonomous.

It continues, for the next step of where this particular incident might go:
Of the three possibilities, Baker said liability would be easiest to prove if a system failed to work properly.

“It would be analogous to a ‘manufacturing defect,’ which is a slam dunk case for plaintiffs in the product liability context,” he said.
 
  • #68
Interestingly, here is a slide from a Tesla presentation a couple of months ago that has quite a few similarities with the path I described, though it is missing the last part (the approval):

[Slide: tesla-autopilot-development-process.png]


http://electrek.co/2016/05/24/tesla-autopilot-miles-data/

Given that the feature was activated one year after the hardware first went on sale, the "simulation" couldn't have been based on real-world test data from the car, and the "in-field performance validation" must have lasted less than one year.

Tesla sold roughly 50,000 Model S's in 2015. At an average of 10,000 miles per year per car and a constant sales rate, that's 250 million miles of recorded data. With an average of 1 death per 100 million miles for normal cars, that means all of this data would have recorded at most about *3* deaths... perhaps 2 fatal crashes. That's not a lot, and nowhere near enough to prove they are safer than human drivers (in deaths, anyway).

And, of course, all of that assumes they are driven in autopilot in situations that are as risky as average -- which is almost certainly not true.
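Spelling out the arithmetic above (using the same assumptions as the post: ~50,000 cars sold at a constant rate through the year, so the average car is on the road about half the year; ~10,000 miles per car per year; and roughly 1 death per 100 million vehicle-miles, per the IIHS figures cited earlier):

```python
cars_sold = 50_000
miles_per_car_per_year = 10_000
avg_fraction_of_year_on_road = 0.5     # constant sales rate through the year
deaths_per_mile = 1 / 100_000_000      # overall US rate cited above

fleet_miles = cars_sold * miles_per_car_per_year * avg_fraction_of_year_on_road
expected_deaths = fleet_miles * deaths_per_mile

print(f"{fleet_miles:,.0f} miles")       # 250,000,000 miles
print(f"{expected_deaths:.1f} deaths")   # 2.5 expected deaths at the average rate
```

Two or three expected fatalities is far too small a sample to distinguish the autopilot's fatality rate from the human baseline.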
 
  • #69
mheslep said:
How does singling out a case of "truck in front" change anything? Riding autonomously, one will routinely see dozens of obstacles requiring the brakes to avoid a collision: the red-light intersection, the downed tree, etc. Either one gets ahead of the software in every possible case, manually applying the brakes, or one never can without detailed knowledge of the software's corner cases. In the former case, routine manual override, of what use are autonomous vehicles?
I used the specific example, but it does not really matter. If the reason to react is visible ahead, a normally working self-driving car won't brake at the last possible second, leaving enough time for the driver to see that the situation might get problematic. If a problem appears suddenly and requires immediate reaction, the driver can simply react, as they do in normal cars as well. Why would you wait for the car?

Also keep in mind that we are already discussing "beyond the safety level of a human driving a car": we are discussing how to reduce the accident rate even further by combining the capabilities of the car with those of the human driver.
mheslep said:
In the former case, routine manual override, of what use are autonomous vehicles?
Emergency situations are not routine. Autonomous vehicles are more convenient to drive - currently their only advantage. Once the technology is more mature, completely driverless taxis, trucks and so on can have a huge economic impact.

russ_watters said:
On page 87, it says "Adaptive cruise control will not make an emergency stop."
How many drivers read page 87 of the manual? It's even worse if you see the car slowing down under normal conditions but cannot rely on it in emergencies.

Concerning the approval: as you said already, regulations are behind. There is probably no formal approval process that Tesla could even apply for.
 
  • #70
mfb said:
...If the reason to react is visible ahead, a normally working self-driving car won't brake at the last possible second, leaving enough time for the driver to see that the situation might get problematic...
That sounds completely unreasonable, but perhaps I misunderstand. You suggest the driver monitor every pending significant action of the would-be autonomous vehicle, and, in the, say, 2 or 3 seconds before a problem, stand ready to pounce if the vehicle fails to react in the first second or so? During test and development, this is indeed how the vehicles are tested in my experience, generally only at very low speeds. But I can't see any practical release to the public under those conditions.

Emergency situations are not routine.
The case of this truck turning across the highway *was* routine. The situation turned from routine to emergency in a matter of seconds, as would most routine driving when grossly mishandled.
 
