Tesla's obsolete technology, why don't Chinese car companies let it go? Analyzing BEV perception
With Tesla's HW4.0 hardware, which carries no radar at all, now filed with the Ministry of Industry and Information Technology (MIIT), Tesla's new generation of pure-vision assisted-driving hardware and software is set to accelerate its arrival in China. Starting in the late stage of HW3.0, Tesla began using occupancy network technology, which places the vehicle itself in a 3D world and handles some of the edge cases of assisted driving more gracefully.
Tesla FSD Beta has used occupancy networks, an evolution of BEV, since last year, after first adopting BEV technology in 2021. And over the past month, a flurry of domestic new-energy car makers and autonomous-driving solution providers have started embracing BEV technology.
NIO's new NOP+ will complete its switch to BEV perception in the first half of 2023; the recently released XPeng P7i's city assisted-driving features also integrate the latest BEV technology; and Haomo.ai, Baidu Apollo, Li Auto, and other vendors have all announced recent BEV-related progress.
BEV, a vision technology that Tesla has already moved past, has been picked up by domestic manufacturers in short order. Why? They say Tesla's pure-vision approach is unsafe, yet its vision technology apparently turns out to be rather appealing after all.
Did Tesla phase out BEV in order to keep hardware costs down?
BEV stands for Bird's Eye View, a top-down perspective. Take Tesla as an example: its models capture images with eight cameras and fuse them. Unlike the image stitching behind a common 360° surround-view display, the system corrects the images from all cameras and feeds them into a single neural network that extracts features; a transformer based on the self-attention mechanism then correlates those features and projects them into a shared vector space. Earlier Teslas that had not yet dropped radar also folded in some radar data. The end result is a bird's-eye view reflecting the vehicle's surroundings.
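The core idea of fusing several camera views into one top-down grid can be sketched in a few lines. This is a deliberately simplified toy, not Tesla's pipeline: the real system uses a learned transformer to place features in the grid, whereas here each camera is assumed to already report detections in ground-plane metres, and overlapping views are fused by keeping the strongest reading per cell. All sizes and confidence values are made up for illustration.

```python
GRID_SIZE = 50          # 50 x 50 cells around the ego vehicle
CELL_METERS = 1.0       # each cell covers 1 m x 1 m of ground
ORIGIN = GRID_SIZE // 2 # ego vehicle sits at the grid centre

def to_cell(x_m, y_m):
    """Map ground-plane metres (ego frame) to BEV grid indices."""
    return ORIGIN + int(x_m / CELL_METERS), ORIGIN + int(y_m / CELL_METERS)

def fuse_into_bev(camera_detections):
    """camera_detections: one list per camera of (x_m, y_m, confidence).
    All cameras accumulate into a single top-down confidence grid;
    where views overlap, the stronger reading wins."""
    bev = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]
    for detections in camera_detections:
        for x_m, y_m, conf in detections:
            i, j = to_cell(x_m, y_m)
            if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
                bev[i][j] = max(bev[i][j], conf)
    return bev

# Two cameras see the same pedestrian 8 m ahead; a third sees a cone to the right.
front_cam = [(8.0, 0.0, 0.9)]
left_cam  = [(8.0, 0.0, 0.7)]
right_cam = [(3.0, 5.0, 0.8)]
bev = fuse_into_bev([front_cam, left_cam, right_cam])
i, j = to_cell(8.0, 0.0)
print(bev[i][j])  # overlapping views fuse to the stronger reading, 0.9
```

The key property this illustrates is that once every camera's output lands in one shared ground plane, the downstream planner only ever reasons about a single unified view, not eight separate images.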
This bird's-eye view is like a God's-eye view: it unifies the vehicle's near-field perception onto a single plane, maximizing perception range and redundancy. But the technique has an inherent flaw: the bird's-eye view is still a 2D image. It is like looking straight down from an airplane; buildings and mountains are hard to distinguish in height and depth from the flat ground around them.
So Tesla models, both before and after adopting BEV, still ran into unrecognized static objects and phantom braking. Although the system can see an object, it cannot necessarily identify what it is; it depends heavily on the calibration and classification work the car maker did in advance. What it can recognize, it can avoid; what it cannot recognize, it may react to too late and simply hit.
Relying only on advance learning and calibration keeps the system on crutches: faced with unexpected road situations such as temporary construction or scattered debris, it cannot handle everything. Within the scope of assisted driving, with an attentive driver as the last line of defense, that is tolerable, but it makes the technology hard to extend to genuine autonomous driving.
So Tesla evolved from BEV to occupancy networks. While occupancy networks are an extension of BEV technology, the biggest difference is that the system's perception changes from 2D to 3D.
A Tesla places itself in 3D space and renders all obstacles as blocks within it. Every 10 milliseconds, the system outputs to the computing unit the occupancy probability of each 3D location around the vehicle, and it can even predict obstacles that are momentarily occluded. Tesla no longer has to obsess over what an object is or how to classify it; knowing its approximate shape is enough to decide whether to avoid it. And Tesla is only one example: Mobileye's latest SuperVision uses a similar 2D-to-3D approach, and they are the only two mainstream vendors still committed to vision-only solutions.
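The "shape is enough, class doesn't matter" idea can be made concrete with a toy occupancy grid. This is an illustrative sketch, not Tesla's implementation: space is divided into voxels, each voxel carries an occupancy probability, and the avoidance decision only asks whether the planned corridor intersects likely-occupied voxels. All sizes and thresholds here are invented.

```python
VOXEL = 0.5     # each voxel is a 0.5 m cube (illustrative)
OCCUPIED = 0.5  # probability above which a voxel counts as blocked

def build_grid(point_probs):
    """point_probs: iterable of ((x, y, z) metres, occupancy probability).
    Returns {voxel index: max probability observed for that voxel}."""
    grid = {}
    for (x, y, z), p in point_probs:
        key = (int(x // VOXEL), int(y // VOXEL), int(z // VOXEL))
        grid[key] = max(grid.get(key, 0.0), p)
    return grid

def path_blocked(grid, corridor):
    """corridor: voxel indices the planned path sweeps through.
    The vehicle avoids or brakes if any of them is likely occupied --
    no need to classify the obstacle first."""
    return any(grid.get(v, 0.0) > OCCUPIED for v in corridor)

# An unclassified object (fallen cargo?) occupies space 4 m ahead of the car.
grid = build_grid([((4.0, 0.0, 0.3), 0.9), ((4.0, 0.2, 0.6), 0.8)])
corridor = [(8, 0, 0)]  # the lane directly ahead at ground level
print(path_blocked(grid, corridor))  # True: something is there, whatever it is
```

Notice that nothing in `path_blocked` ever asks what the object is; that is exactly the robustness gain over a classifier that must have seen the object category during training.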
NIO, XPeng, and Li Auto pile on hardware to cover technical shortcomings, but is Tesla right in the end?
So why is NIO turning to Tesla's cast-off BEV at this stage? Tesla phased it out precisely because it lacks some of the advantages NIO has, and NIO dares to use the technology precisely because its hardware stack is tall enough.
Whether it is BEV, occupancy networks, or whatever more advanced technique HW4.0 may bring, what drives Tesla to keep pushing its vision algorithms ever further is the absence of radar sensors, especially a sensor like LiDAR that can scan out 3D space. Tesla's occupancy network can be understood as a new route it forced itself onto in order to avoid LiDAR: since flat visual perception cannot produce a 3D effect and the vehicle cannot be handed a pair of 3D glasses, the only place left to compete is the algorithmic architecture.
NIO, XPeng, Li Auto, and the vast majority of domestic autonomous-driving suppliers all chose the LiDAR route, and LiDAR really is a shortcut: first-mover advantage or not, it accelerates autonomous-driving R&D toward deployment. BEV technology alone can only deliver a 2D bird's-eye view, but LiDAR delivers 3D perception directly. Many manufacturers also mount the LiDAR relatively high for a better field of view, the FOV figure everyone advertises, and quite a few models carry more than one LiDAR, so every direction can in fact be perceived in 3D.
The laser beams of a LiDAR trace out a rough image of an object through its point cloud, and some LiDARs with higher equivalent beam counts approach a genuine imaging capability, rendering in 3D. The visual strengths of BEV technology can thus be absorbed, while its shortfalls in perception accuracy are made up by LiDAR or 4D imaging millimeter-wave radar.
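How a point cloud yields "rough shape" can be shown with the simplest possible primitive: fitting an axis-aligned bounding box to a cluster of LiDAR returns. Real perception stacks cluster and fit oriented boxes with far more sophistication; this sketch, with made-up coordinates, only shows that a handful of 3D points already gives size and position without any classification.

```python
def bounding_box(points):
    """Axis-aligned 3D bounding box of a LiDAR point cluster.
    points: list of (x, y, z) in metres. Returns (min corner, max corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# A few returns bouncing off an unknown object roughly 10 m ahead of the car.
cluster = [(9.8, -0.4, 0.1), (10.2, 0.5, 0.9), (10.0, 0.0, 1.4)]
lo, hi = bounding_box(cluster)
height = hi[2] - lo[2]
print(lo, hi)  # enough to know the object is ~1.3 m tall and in-lane
```

Those three numbers per corner are precisely the "approximate shape" an occupancy-style planner needs: is it tall enough to matter, and does it overlap my path?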
And although BEV looks outdated, it becomes more and more important as intelligent driving moves into cities, since a bird's-eye view lays out the surrounding objects clearly. But the cost remains very high. Li Xiang said at a communication meeting some time ago that doing BEV-based city assisted driving may require an investment of more than 10 billion yuan. So do not assume the autonomous-driving race has already peaked and the money has all been burned; with that much capital still to invest, it remains almost impossible for the new players to turn a profit in the short term.
Summary
While using Tesla's cast-off technology, domestic manufacturers intend to keep making BEV technology bigger and stronger. Tesla pushed hard on software in order to cut hardware costs; domestic manufacturers have stacked up enough hardware and compute to patch over Tesla's weaknesses, so they are capable of overcoming some of the bottlenecks facing BEV technology.
Of course, one problem cannot be sidestepped: judging perception priority in a multi-sensor fusion scheme. Tesla is pure vision and never has to reconcile radar against cameras, but other car companies and suppliers still face this problem. If the 2D BEV bird's-eye view conflicts with 3D LiDAR perception or 4D imaging millimeter-wave radar perception, which one should the system listen to?
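One common way to frame that arbitration, sketched here purely for illustration, is to weight each sensor's report by a base trust level scaled by current conditions, rather than declaring a fixed winner. Production stacks use probabilistic fusion (Kalman-style filters and learned fusion networks); the sensors, trust values, and degradation factors below are all invented.

```python
# Base trust per sensor type: how much we believe it under ideal conditions.
# These numbers are illustrative, not any vendor's actual policy.
BASE_TRUST = {"camera": 0.6, "lidar": 0.9, "radar_4d": 0.8}

def fuse_distance(reports, conditions):
    """reports: {sensor: estimated distance to obstacle, in metres}.
    conditions: per-sensor degradation factor in [0, 1]
    (e.g. cameras degrade at night, LiDAR in heavy rain).
    Returns a trust-weighted distance estimate."""
    est, total_w = 0.0, 0.0
    for sensor, dist in reports.items():
        w = BASE_TRUST[sensor] * conditions.get(sensor, 1.0)
        est += w * dist
        total_w += w
    return est / total_w

# Camera and LiDAR disagree about an obstacle's distance at night:
reports = {"camera": 30.0, "lidar": 24.0}
night = {"camera": 0.5, "lidar": 1.0}  # vision degraded, LiDAR unaffected
print(round(fuse_distance(reports, night), 1))  # 25.5: the answer leans toward LiDAR
```

The point is that "who should be listened to" need not be a binary choice; the weighting can shift continuously with conditions, which is exactly what makes the fusion policy hard to design and validate.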
To solve the above problem, the industry still has to work on visual perception itself, and the crutch of LiDAR may well become a shackle on the future development of autonomous driving.
This article comes from the Chejiahao author LuKa Auto; copyright belongs to the author. Please contact the author before reproducing it in any form.