
Why we find self-driving cars so scary



Tesla CEO Elon Musk recently lashed out at the press, decrying the "holier-than-thou hypocrisy of big media companies that lay claim to the truth but publish only enough to sugarcoat the lie." That, he said, is "why the public no longer respects them."

Mr. Musk's frustration is driven in part by the media's outsize attention to accidents involving Tesla's semi-autonomous "Autopilot" feature. On the company's most recent earnings call, he complained: "There are over one million automotive deaths per year. And how many do you read about? Basically, none of them … but if it is an autonomous situation, it's headline news … They write inflammatory headlines that are fundamentally misleading to the readers. It's really outrageous."

But not all accidents are created equal. While most experts on the subject (myself included) agree with Mr. Musk that autonomous vehicles can dramatically reduce automotive deaths overall, his criticism of the press reveals a misunderstanding of human nature and the perception of risk: even if these cars are safer overall, how and when they fail matters a great deal.

If their mistakes mimic human errors, such as failing to negotiate a curve in torrential rain or not noticing a motorcycle approaching from behind during a lane change, people will probably be more accepting of the new technology. After all, if you could reasonably have made the same mistake, is it fair to hold your driverless car to a higher standard? But if its failures seem bizarre and unpredictable, adoption of this nascent technology will meet serious resistance. Unfortunately, that is likely to remain the case.

Would you buy a car that had a tendency, however rare, to swerve off the road in broad daylight because it mistook a flash of glare in its camera lens for the sudden appearance of a truck's headlights bearing down at close range? Or one that slams on the brakes on the highway because a rainstorm looks to it like a concrete wall?

Suppressing these "false positives" is already a factor in some accidents. A self-driving Uber test vehicle in Tempe, Arizona, recently killed a pedestrian who was walking a bicycle across the street at night, despite its sensors registering her presence. The algorithms that interpret the sensor data were reportedly at fault, presumably dismissing the image as one of the all-too-common "ghosts" captured by such instruments operating in marginal lighting conditions. But sympathize with the unfortunate engineer who fixes this problem by lowering the thresholds for evasive action, only to discover that the updated software takes passengers on Mr. Toad's Wild Ride as it dodges imaginary obstacles lurking around every corner.
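To make that trade-off concrete, here is a minimal sketch, not drawn from the article itself: the detections, confidence scores and threshold values below are invented, and real perception stacks weigh far more signals than a single "brake or not" threshold.

```python
# Illustrative sketch only: a toy obstacle detector, not any real self-driving
# stack. All labels, confidence scores, and thresholds below are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # how sure the perception system is that something is there

def should_brake(detections, threshold):
    """Take evasive action if any detection clears the confidence threshold."""
    return any(d.confidence >= threshold for d in detections)

real_frame = [Detection("pedestrian", 0.55)]   # dimly lit, but real
ghost_frame = [Detection("lens_flare", 0.40)]  # nothing actually there

for threshold in (0.60, 0.35):
    print(f"threshold={threshold}: "
          f"pedestrian frame -> brake={should_brake(real_frame, threshold)}, "
          f"ghost frame -> brake={should_brake(ghost_frame, threshold)}")
# threshold=0.60 ignores the ghost but also misses the real pedestrian;
# threshold=0.35 catches the pedestrian but slams the brakes for the flare too.
```

Push the threshold down and phantom braking goes up; push it up and real obstacles slip through. That is the dilemma the engineer above cannot escape.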

Unfortunately, this problem is not easily solved with current technology. A shortcoming of today's machine-learning programs is that they fail in surprising and decidedly non-human ways. A team of students from the Massachusetts Institute of Technology recently demonstrated, for example, how one of Google's advanced image classifiers could easily be tricked into mistaking an obvious image of a turtle for a rifle, and a cat for guacamole. A growing academic literature studies these "adversarial examples," trying to work out how and why such fakes, completely obvious to the human eye, can so easily fool computer-vision systems.
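For readers curious what an adversarial example looks like mechanically, here is a minimal sketch against a toy linear classifier. The "turtle" and "rifle" labels, the weights and every number are invented for illustration; the MIT demonstration attacked a deep neural network, which this does not reproduce.

```python
# Illustrative sketch only: a tiny adversarial perturbation against a made-up
# linear classifier. Nothing here reflects the actual Google model or MIT attack.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)   # weights of a toy linear classifier
x = rng.normal(size=1000)   # a toy "image" whose pixel values are roughly unit-sized

def predict(image):
    return "turtle" if w @ image > 0 else "rifle"

if w @ x <= 0:              # make sure the clean image starts out as "turtle"
    x = -x

# Nudge every pixel by the same small amount in the direction that most lowers
# the score (the idea behind fast-gradient-sign-style attacks).
margin = w @ x
epsilon = 1.01 * margin / np.sum(np.abs(w))   # smallest uniform step that flips the label
x_adv = x - epsilon * np.sign(w)

print(predict(x), "->", predict(x_adv))       # turtle -> rifle
print(f"max change to any pixel: {epsilon:.3f}")  # far smaller than the pixels themselves
```

A human looking at the perturbed "image" would see essentially the same thing, which is exactly why such failures feel so alien.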

The basic reason, however, is already clear: these sophisticated AI programs do not interpret the world as humans do. In particular, they lack a commonsense understanding of the scenes they are trying to decipher. So a child holding a crocodile-shaped balloon, or a billboard showing a giant 3-D beer bottle on a tropical island, can send a self-driving car into evasive maneuvers that spell disaster.

As AI's six-decade history illustrates, addressing these problems will require more than fine-tuning today's algorithms. For the first 30 years or so, research focused on pushing logical reasoning ("if A and B, then C") to its limits, in the hope that this approach would prove to be the basis of human intelligence. But it turned out to be inadequate for many of today's greatest practical challenges. Modern machine learning instead takes a more holistic approach, one closer to perception than to logic. How to combine the two, and so achieve "artificial general intelligence," remains the field's elusive holy grail.

Our dreams of automated drivers and robotic maids may have to wait for a new paradigm to emerge, one that better mimics our distinctively human capacity to integrate new knowledge with old, apply common sense to novel situations and exercise sound judgment about which risks are worth taking.

Ironically, Mr. Musk himself is one of the main contributors to the public's misunderstanding of the current state of the art. His relentless hyping of artificial intelligence, often accompanied by tireless warnings of an imminent robot apocalypse, only reinforces the narrative that this technology is far more advanced and dangerous than it really is. It is understandable, then, that consumers react with horror to stories of self-driving cars mowing down innocent bystanders or killing their own occupants. Such incidents come to be seen as the first cases of malevolent machines run amok, rather than what they really are: product-design failures.

Despite the impression that Jetsons-style cars are just around the corner, public acceptance of their failures may prove to be their biggest obstacle. If advocates of autonomous technologies, from driverless cars to military drones to top-of-the-line robots, are going to make their case in the court of public opinion, they will need more than cold statistics and controlled tests. They must also address consumers' legitimate expectation that the next generation of AI-driven technology will fail in reasonable and understandable ways.

As the old joke goes, to err is human, but to really foul things up you need a computer. Mr. Musk would do well to heed that message before he shoots the messenger.


