A carriage mistaken for a living person: the "ghost-seeing" Tesla is fooled again!
This time, Tesla was stumped by a horse-drawn carriage.
On the screen it showed up as a big truck one moment,
a semi truck the next,
and, most incredibly, it even registered as a person walking in front of the car...
Is it seeing a "ghost" again?
The TikTok video of the Tesla failing to recognize the carriage went viral, and even Igor Susmelj, co-founder of AI software company Lightly, asked:
I would like to know how many carriages this model has seen during training.
A humble pony cart has stumped Tesla.
It is not hard to see that when it comes to recognizing edge cases, Tesla's Autopilot (AP), and even its Full Self-Driving (FSD), are still prone to fatal accidents on the road.
Fred Lambert, editor-in-chief of Electrek, published a test of his Tesla in the Blue Ridge Mountains of the United States yesterday:
The video shows the Tesla unable to stay within the marked lanes. Even more terrifying, it nearly took Fred Lambert off a cliff.
And it's hardly the first or second time Tesla has had recognition problems:
Identifying a person holding a traffic sign as a traffic pillar.
Recognizing all kinds of animals either as adult humans, or as nothing at all...
Identifying the moon as a yellow traffic light.
Next, let's take a closer look at Tesla's recognition failures.
Can't see white?
Tesla crashes are not rare, but why do they keep involving white trucks?
The white in front of you isn't white, so what truck are you even talking about?
In March 2021, a white Tesla Model Y hit a white semi-trailer truck at an intersection in southwest Detroit, USA.
And this isn't the first time Tesla has collided with a white truck.
As early as 2016, a Tesla Model S in Florida, USA, driving on Autopilot, collided with a white semi-trailer that was turning across its path, slid under the trailer, and the Tesla driver was killed.
The real reason turned out to be that Tesla recognized the white trailer as sky and drove straight into it.
Have you ever seen a moving sky...
Previously, a Zhihu user ran a visual-recognition experiment with the following picture.
Import the white-truck image into Photoshop and use the Quick Selection tool to try to select the outline of the white truck. The result looks like this:
A large patch of blue sky and white clouds gets selected along with the truck. To Photoshop, the white cargo box and the sky are one and the same.
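You can reproduce the idea outside Photoshop. Below is a minimal sketch of selection by color similarity, the same mechanism a quick selection by color relies on; the image path and the distance threshold of 30 are illustrative assumptions, not values from the original experiment:

```python
import cv2
import numpy as np

# Load a photo of a white truck against a bright sky
# ("white_truck.jpg" is a placeholder path).
img = cv2.imread("white_truck.jpg")

# Estimate the sky color from a strip along the top of the frame.
sky = img[:40, :].reshape(-1, 3).mean(axis=0)

# Select every pixel whose color sits within a small distance of
# the sky color, roughly what a quick selection by color does.
dist = np.linalg.norm(img.astype(np.float32) - sky, axis=2)
mask = (dist < 30).astype(np.uint8) * 255  # assumed threshold

# On a bright day the white trailer falls inside the mask along
# with the sky: judged by color alone, the two are inseparable.
cv2.imwrite("sky_mask.png", mask)
```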
The same may well be true of Tesla's Autopilot visual-recognition system. Well, it turns out Tesla is "colorblind" after all.
Beyond color, why does Tesla seem to "pick trucks to hit"?
That brings us to how autonomous-driving systems separate out moving targets.
Weighing real-time performance against cost, the industry today mostly uses the frame-difference method. It consumes the least computing resources and is the easiest to run in real time, but the trade-off is lower accuracy.
The so-called frame-difference method detects pixel changes between adjacent frames.
Its basic principle is this:
From a video of a moving target, a series of consecutive frames can be extracted in time order. Between adjacent frames, the background pixels change very little while the moving target's pixels change a lot, so by taking the difference between adjacent frames, the moving target can be segmented out.
For a relatively large moving target of uniform color, such as a big white truck, the frame-difference method will "create a hole inside the target, making it impossible to completely segment and extract the moving object".
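Here is a minimal frame-difference sketch in Python with OpenCV; the video path and the threshold of 25 are illustrative assumptions. Point it at footage of a large white truck and you can watch the hollow mask appear:

```python
import cv2

# A minimal frame-difference motion detector
# ("dashcam.mp4" is a placeholder path).
cap = cv2.VideoCapture("dashcam.mp4")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that changed between adjacent frames count as motion.
    diff = cv2.absdiff(gray, prev)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Failure mode: the uniform white interior of a big trailer
    # barely changes from frame to frame, so the motion mask is
    # hollow; only the truck's edges light up.
    cv2.imshow("motion", motion)
    prev = gray
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```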
The side of a large high-chassis truck is like a sheet of white paper, and deep-learning-based machine vision, like a blind man, drives straight into it without slowing down.
Ghost in broad daylight
Tesla's visual-recognition system has produced its share of "supernatural" events before, too.
One Tesla owner driving through an uninhabited area found that the car's obstacle-detection display showed many "human-shaped" objects around the vehicle.
Another netizen posted a video of a Tesla driving through a cemetery.
In the video, as the car moves, the on-screen display keeps showing pedestrians passing in front of the vehicle, yet the footage shows no one there at all.
Tesla isn't actually seeing a "ghost"; rather, the vehicle has encountered imagery on the road that effectively attacks the advanced driver-assistance system (ADAS).
Once again, the blame falls on Tesla's Autopilot.
Teslas driving normally down the road will sometimes mistake roadside imagery (such as a Stop sign in an advertisement) for a real speed-limit or stop sign and slam on the brakes, a habit some owners have dubbed "ghost braking".
A ghost-seeing car like that? I really wouldn't dare ride in it.
How does the image recognition work?
Tesla's cars are equipped with 8 cameras, 1 millimeter-wave radar, and 12 ultrasonic sensors to perceive the external environment.
The 8 cameras recognize objects in the real world: pedestrians, vehicles, animals, and other obstacles on the road.
Bear in mind that all 8 cameras capture two-dimensional images with no depth information. Tesla therefore fuses the visual input from 8 different viewpoints to output a three-dimensional vector space.
The vector space produced by multi-camera fusion is of visibly higher quality, helping the autonomous vehicle perceive the world and localize itself more accurately.
It covers roads, traffic lights, vehicles, and the other elements autonomous driving needs to observe.
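To get a feel for why multiple viewpoints recover the depth a single camera lacks, here is a minimal two-view triangulation sketch; the projection matrices and image points are made-up assumptions, not Tesla's calibration (Tesla's actual fusion is learned end to end, not a geometric triangulation like this):

```python
import cv2
import numpy as np

# Two made-up 3x4 camera projection matrices: the second camera
# sits one meter to the side of the first (intrinsics folded in).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# The same obstacle observed in both views, as normalized image
# coordinates in the 2xN layout cv2.triangulatePoints expects.
pts1 = np.array([[0.50], [0.20]])
pts2 = np.array([[0.25], [0.20]])

# One camera alone only pins down a ray; two rays fix a 3D point.
X = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X[:3] / X[3]).ravel()
print("recovered 3D point:", X)  # ~ (2.0, 0.8, 4.0)
```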
Algorithmically, Tesla's deep-learning network is called HydraNet.
Its underlying backbone is shared: the full HydraNet comprises 48 different neural networks, and through these 48 networks it can output 1,000 distinct prediction tensors.
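As a rough picture of the shared-backbone, many-heads idea, here is a toy multi-task network in PyTorch; the layer sizes, task names, and output dimensions are illustrative guesses, not Tesla's architecture:

```python
import torch
import torch.nn as nn

# A toy HydraNet-style model: one shared backbone, several
# task-specific heads. All sizes and task names are invented.
class ToyHydraNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor, computed once per image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Independent heads, one per prediction task.
        self.heads = nn.ModuleDict({
            "vehicles": nn.Linear(64, 10),
            "lanes": nn.Linear(64, 8),
            "traffic_lights": nn.Linear(64, 4),
        })

    def forward(self, x):
        features = self.backbone(x)
        # Each head reads the shared features and emits its own tensor.
        return {name: head(features) for name, head in self.heads.items()}

model = ToyHydraNet()
outputs = model(torch.randn(1, 3, 128, 128))
print({name: t.shape for name, t in outputs.items()})
```

The appeal of the design is that the expensive backbone runs once per image, while each lightweight head reads off its own task from the shared features.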
However, there will always be things the vision system has never learned.
In the early years, Tesla outsourced its data labeling to third parties, but found the quality of the labeled data too low, and went on to expand its own in-house team.
Initially, most of Tesla's annotation was done on 2D images.
Before long, annotation moved into 4D space, that is, 3D space plus the time dimension, labeled directly in the vector space with a clip as the minimum annotation unit.
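To make "3D plus time, with a clip as the minimum unit" concrete, here is a hypothetical sketch of what such a label's shape might be; every field name is invented for illustration, not Tesla's schema:

```python
from dataclasses import dataclass, field

# A hypothetical sketch of a "4D" label: a 3D box in the shared
# vector space, tracked across the frames of one clip.
@dataclass
class Box3D:
    x: float          # center in the vehicle frame (m)
    y: float
    z: float
    length: float
    width: float
    height: float
    yaw: float        # heading angle (rad)

@dataclass
class TrackedObject:
    label: str                   # e.g. "truck", "pedestrian"
    boxes: dict[int, Box3D]      # frame index -> 3D box over time

@dataclass
class LabeledClip:
    clip_id: str                 # the clip is the minimum unit
    objects: list[TrackedObject] = field(default_factory=list)

clip = LabeledClip("clip_0001", [
    TrackedObject("truck", {0: Box3D(12.0, 0.0, 1.5, 16.0, 2.5, 4.0, 0.0)}),
])
print(clip.objects[0].boxes[0])
```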
The carriage-recognition failure being ridiculed this time most likely comes down to carriages not yet appearing in the labeled data.
The problem is, Musk laid off data annotators at the self-driving unit in California only a short time ago.
Tesla's "vision" is simply worrying.