The actual prospect of autonomous cars?

  • Thread starter Gear300
  • Tags
    Cars
  • #36
Personally, I think that towards the end of the century (perhaps sooner) we will definitely have the technology whereby fully autonomous vehicles will be considerably safer than those with drivers. BUT ... I am less confident that we will have fully worked out (1) sufficient societal acceptance, (2) the necessary infrastructure, and (3) the legal issues (insurance, etc.).

Technology is, relatively speaking, the easy part.
 
  • Like
Likes russ_watters, PeroK and Bystander
  • #37
anorlunda said:
I also reject her main point that self-driving cars get rear-ended too often because they stop for objects on the road that are difficult to identify. I say that human drivers are at fault if they make snap decisions to run over some objects. That plastic bag on the road might contain a kitten. A child's ball rolling toward the road might be followed by a child. So if I stop for any object in or near the road and get rear-ended, the collision is not my fault. Ditto for an AI driver.
Human drivers (good ones, anyway) look in their rear-view mirror as they slam on the brakes to save the bunny in the road. If the car behind is too close or on their phone, the bunny loses.
 
  • Like
Likes russ_watters
  • #38
gmax137 said:
Human drivers (good ones, anyway) look in their rear-view mirror as they slam on the brakes to save the bunny in the road. If the car behind is too close or on their phone, the bunny loses.
So your defense would be: "Sorry your honor, that baby looked like a bunny to me."
 
  • Like
Likes russ_watters
  • #39
anorlunda said:
So your defense would be: "Sorry your honor, that baby looked like a bunny to me."
No, but that goes to the point: it is OK to run over some things in the road, but not others. There has to be an intelligent assessment weighing the damage to the "object" in the road, the damage to the car, and the damage from the following car. If there's no one in the oncoming lane, the best choice could be to cross the lines into that lane. Blindly braking hard whenever anything is in the road is too crude.
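Just to make that trade-off concrete, here is a toy Python sketch; the options and cost numbers are invented for illustration, not from any real controller:

Code:
def choose_action(object_cost, rear_gap_s, oncoming_clear):
    """Pick the least-bad response to something in the road."""
    REAR_END_COST = 50.0  # invented cost of being rear-ended after hard braking
    options = {
        "continue": object_cost,                      # cost of hitting the object
        "brake": REAR_END_COST if rear_gap_s < 1.5 else 0.0,
        "swerve": 0.0 if oncoming_clear else 1000.0,  # head-on collision risk
    }
    return min(options, key=options.get)

# plastic bag ahead, tailgater behind, oncoming traffic: drive over it
print(choose_action(object_cost=1.0, rear_gap_s=1.0, oncoming_clear=False))
# possible child ahead, tailgater behind, oncoming lane clear: swerve
print(choose_action(object_cost=1e6, rear_gap_s=1.0, oncoming_clear=True))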
 
  • Like
Likes russ_watters
  • #40
phinds said:
Personally, I think that towards the end of the century (perhaps sooner) we will definitely have the technology whereby fully autonomous vehicles will be considerably safer than those with drivers. BUT ... I am less confident that we will have fully worked out (1) sufficient societal acceptance, (2) the necessary infrastructure, and (3) the legal issues (insurance, etc.).

Technology is, relatively speaking, the easy part.
I should add that one of my big concerns about autonomous cars is that people in the U.S. will put up with tens of thousands of deaths per year involving human drivers (we have about 40,000/year), but let one person get killed by an autonomous vehicle and the manufacturer will never hear the end of it and will be sued by the relatives.
 
  • Like
Likes PeroK
  • #41
gmax137 said:
No, but that goes to the point: it is OK to run over some things in the road, but not others. There has to be an intelligent assessment weighing the damage to the "object" in the road, the damage to the car, and the damage from the following car. If there's no one in the oncoming lane, the best choice could be to cross the lines into that lane. Blindly braking hard whenever anything is in the road is too crude.
It may be useful to look at how AI training data is gathered and used.

My understanding is that Tesla gathers data from every Tesla. Not just the cars equipped with autodrive, but all of them, and most importantly when manually driven. Multiple times per hour, each car can record what I call triplet data packets and send them wirelessly to Tesla:
  1. What did the 8 cameras, looking in all directions, see? Other sensor readings can be included: slippery road, yes/no?
  2. What action did the driver take? Steering/throttle/brakes.
  3. What was the outcome? Nothing/accident/full stop.
Given 2.5 million Teslas on the road, say 10 triplets per hour, and an hour or so of driving per day, they might generate on the order of ten billion triplet examples per year to train their AI. There would be multiple examples of "Something that looks like X in front, no car behind, driver braked, no accident," examples of "Something that looks like X in front, car behind, driver swerved, no accident," plus examples of "Something that looks like X in front, car stops, accident results." There is no need to analyze what X really is, just what it looks like to the cameras. The AI is being "taught" by human drivers.
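To picture what one of those triplets might look like as a data record, here is a minimal Python sketch. The field names and encodings are my assumptions for illustration; Tesla's actual format is not public.

Code:
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Triplet:
    camera_frames: List[bytes]  # 1. one compressed frame per camera (8 cameras)
    slippery_road: bool         #    plus any other sensor flags
    steering: float             # 2. driver action: wheel angle
    throttle: float             #    driver action: pedal position
    braking: float              #    driver action: brake pressure
    outcome: str                # 3. "nothing" | "accident" | "full_stop"

def to_training_example(t: Triplet) -> Tuple[tuple, tuple, float]:
    """Turn a triplet into a supervised (input, target, weight) example."""
    x = (t.camera_frames, t.slippery_road)   # what the car sensed
    y = (t.steering, t.throttle, t.braking)  # what the human driver did
    # imitate drivers whose triplet ended without an accident;
    # accident triplets get zero weight as imitation targets
    weight = 0.0 if t.outcome == "accident" else 1.0
    return x, y, weight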

Neural network AI is not really intelligence; it is merely pattern matching. When the sensed data look like A, take the action B that avoided accidents, and avoid the action C that caused them. That is not "blindly" choosing a course of action. Just the opposite: it uses all available data.

Neural networks do not reason. They do not use logic. They merely match patterns of input data with desired outputs. They call that AI or intelligence for marketing purposes, but really that is a misnomer.
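A deliberately crude way to see "pattern matching, not reasoning" in code: remember input patterns together with the actions that avoided accidents, and return the action of the closest stored match. Real networks interpolate with learned weights rather than looking up examples, but the principle is the same. The feature vectors here are invented for illustration:

Code:
import math

# remembered (input pattern, accident-free action) pairs -- invented data
memory = [
    ((0.9, 0.1, 0.0), "brake"),    # object ahead, no car close behind
    ((0.9, 0.8, 0.0), "swerve"),   # object ahead, car close behind
    ((0.0, 0.2, 0.0), "continue"), # clear road
]

def act(sensed):
    """Return the action whose stored pattern best matches the sensed data."""
    nearest = min(memory, key=lambda m: math.dist(m[0], sensed))
    return nearest[1]

print(act((0.85, 0.75, 0.0)))  # -> "swerve"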

A garage door opener that refuses to close when something blocks the light beam is an example of a one-branch neural net. The door opener advertisement may say "AI smart door," but we know better.
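In code, that entire "smart door" is a single branch (function name invented for illustration):

Code:
def door_should_close(beam_blocked: bool) -> bool:
    # the entire "neural net": one input, one branch
    return not beam_blocked

assert door_should_close(beam_blocked=False)     # beam clear: close
assert not door_should_close(beam_blocked=True)  # beam blocked: refuse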
 
  • Informative
Likes gmax137