
The era of data-driven autonomous driving 3.0 has arrived.

[Yiche Original] When assisted driving came up in the past, Tesla, Huawei or Xpeng probably came to mind first. Now there is a new player coming up from behind that is worth remembering: HAOMO.AI (Haomo Zhixing). Incubated from Great Wall Motor's intelligent-driving unit, the company has produced an impressive report card in the roughly 1,020 days since its founding.

At the 6th HAOMO AI DAY, held on September 13, Haomo once again presented its latest achievements: in 1,000 days it claims to have grown faster than any other autonomous-driving company in China, billing itself "No. 1 in mass-produced autonomous driving in China," and it has stably delivered three generations of passenger-car driving-assistance products over two and a half years. Its HPilot system is now installed on more than ten vehicle models. The WEY Mocha PHEV and the Ora Good Cat, both equipped with HPilot, earned five-star safety ratings from Euro NCAP, making Haomo the first Chinese autonomous-driving company to take mass-produced products overseas. In last-mile logistics delivery, Haomo holds a leading market share in the field, and its Xiaomotuo (Little Magic Camel) 2.0 delivery robot is being mass-produced and delivered to customers. MANA, billed as China's first data-intelligence system for autonomous driving, has completed labeling of hundreds of thousands of all-factor, multi-modal clips, accumulated a 3-million-hour database of cognitive scenes from driving on Chinese roads, equivalent to 40,000 years of human driving experience, and has largely closed its data loop.

It is hard to believe this is a company founded only two and a half years ago. Below, we walk through the substantive takeaways from this AI Day to understand how Haomo has made such rapid progress and notable achievements.

01 "Big model + big data": sprinting into the autonomous driving 3.0 era

At the event, Gu Weihao, CEO of HAOMO.AI, delivered a speech titled "Haomo and the Autonomous Driving 3.0 Era," putting forward, for the first time in the industry, the judgment that autonomous driving has entered a data-driven 3.0 era.

So how has autonomous driving evolved? What distinguishes the 1.0, 2.0 and 3.0 eras?

First, in the 1.0 era, hardware was the main driving force: perception relied mainly on lidar, cognition relied on hand-written rules, vehicle cost was high, and accumulated autonomous-driving mileage was on the order of 10,000 kilometers.

Second, the 2.0 era was mainly software-driven: perception shifted from lidar alone to multiple sensors each producing its own output, with fusion still imperfect; training still used small models on little data, and cognition was still dominated by hand-written rules. Accumulated autonomous-driving mileage rose to between 1 million and 100 million kilometers.

Finally, the data-driven 3.0 era is the direction the industry is sprinting toward: perception fuses multiple sensors into a single output, cognition evolves to interpret common-sense knowledge about driving scenes, training reaches the scale of big models and big data, and accumulated mileage grows beyond 100 million kilometers. Haomo has been preparing for the 3.0 era: its perception, cognition and model construction are all built in a data-driven way. Everything the company does, it says, is aimed at building good data pipelines and computing centers, so that it can acquire data more efficiently and turn data into knowledge. Tesla has already led the world into the autonomous driving 3.0 era, and Haomo is arguably the Chinese company most likely to enter it first.

Gu said that attention-based models, as the current trend in AI, bring both opportunities and challenges and have become one of the key driving forces of the autonomous driving 3.0 era. The biggest feature of attention is its simple structure: the basic unit can be stacked almost without limit to obtain a model with a huge number of parameters. As parameter counts grow and training methods improve, large models have surpassed average human performance on many NLP tasks. But large attention models also face a major challenge: their demand for computing power far outstrips Moore's Law, so training costs are very high, and deploying them on in-vehicle hardware is very difficult.
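The scaling logic Gu describes can be illustrated with a back-of-the-envelope parameter count for stacked attention blocks. This is a generic sketch of a Transformer-style model, not Haomo's actual architecture; it counts only the attention projections and feed-forward layers and ignores embeddings, biases and normalization.

```python
def transformer_params(d_model: int, n_layers: int) -> int:
    """Rough parameter count for a stack of identical attention blocks.

    Each block contributes:
      - 4 * d_model**2 for the Q, K, V and output projections
      - 8 * d_model**2 for a feed-forward layer with hidden size 4 * d_model
    Embeddings, biases and layer norms are ignored in this estimate.
    """
    per_block = 4 * d_model * d_model + 8 * d_model * d_model
    return n_layers * per_block

# Stacking the same simple unit deeper scales parameters linearly...
assert transformer_params(512, 24) == 2 * transformer_params(512, 12)

# ...and a GPT-3-sized configuration (d_model=12288, 96 layers) already
# lands in the ~170-billion-parameter range from repetition alone.
assert transformer_params(12288, 96) > 170_000_000_000
```

The point is that the basic unit stays simple; scale comes purely from repetition, which is exactly why compute demand, rather than architectural complexity, becomes the bottleneck.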

The opportunities and challenges brought by attention models are driving technological change across the autonomous-driving industry. "Haomo is reducing the cost of autonomous driving through low-carbon supercomputing, realizing in-vehicle deployment of large models by improving vehicle-side model and chip design, and making large models more effective through how the data is organized," Gu said. At the data level, attention-based large models require large-scale, diverse training data, and passenger-car assisted driving, built on large volumes of real human driving, is able to accumulate data of sufficient scale and diversity. Gu believes assisted driving is the only viable path to autonomous driving, because only assisted driving can collect data at that scale and diversity. After nearly three years of development, Haomo is reportedly China's leading mass-production autonomous-driving company; its users' assisted-driving mileage is approaching 17 million kilometers, and the data scale continues to grow rapidly.

On low-carbon supercomputing, Haomo officially announced at this AI Day the establishment of the first supercomputing center built by a Chinese autonomous-driving technology company. "How to improve training efficiency, reduce training cost and realize low-carbon computing is the key threshold for autonomous driving to reach ordinary households," Gu said. The center is designed to support models with hundreds of billions of parameters, training data on the scale of a million clips, and an overall training cost reduced by a factor of 200.

At the algorithm level, Gu said Haomo began researching and attempting to deploy large-scale Transformer models as early as 2021. Over the past year it has upgraded its training platform, switched its data specifications and labeling methods, and explored model details for specific perception and cognition tasks, laying a solid foundation for the rapid development of urban navigation-assisted driving.

02 MANA upgraded across the board to help assisted driving enter the city

Urban navigation-assisted driving is the core breakthrough point for autonomous-driving features today, and fiercely contested ground. But moving from highway scenes, with simple roads and traffic, to urban scenes, with many traffic participants and far more complicated road conditions, multiplies the technical difficulty the system faces. These challenges have held back many manufacturers' push "into the city," leaving them still fighting for technological breakthroughs. As early as the end of 2021, Haomo set its sights on urban assisted-driving scenarios and was among the first to begin technical exploration in the field. Now its data-intelligence system MANA is undergoing a series of milestone upgrades.

Gu said urban roads present "four scene problems and six technical challenges." The scene problems are urban road maintenance, dense heavy vehicles, narrow space for lane changes, and diverse urban environments. Solving them poses six technical challenges: how to convert data scale into model performance more efficiently; how to make data play a bigger role; how to use perception-centric technology to understand real space; how to use the interaction conventions of the human world; how to make simulation more realistic; and how to make the autonomous-driving system move more like a human.

To meet these challenges, MANA's perceptual intelligence and cognitive intelligence have both been upgraded.

First, MANA uses self-supervised learning on unlabeled data from production vehicles to build model performance. Compared with training on only a small number of labeled samples, training effectiveness improves by more than 3x, effectively converting Haomo's data advantage into model performance and better serving the various perception tasks of autonomous driving.
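To make the idea of self-supervision concrete, here is a deliberately tiny, hypothetical sketch in plain Python (not MANA's actual method): the training target is carved out of the raw data itself by masking a value and predicting it from its neighbors, so no human labels are needed.

```python
def make_pretext_pairs(seq):
    """Build (context, target) pairs from raw, unlabeled data:
    mask each interior element and ask the model to predict it
    from its two neighbors. The 'label' comes from the data itself."""
    return [((seq[i - 1], seq[i + 1]), seq[i]) for i in range(1, len(seq) - 1)]

def fit_weight(pairs):
    """Least-squares fit of a one-parameter model:
    prediction = w * mean(neighbors)."""
    num = sum(((l + r) / 2) * t for (l, r), t in pairs)
    den = sum(((l + r) / 2) ** 2 for (l, r), _ in pairs)
    return num / den

raw_signal = [float(x) for x in range(1, 11)]   # an unlabeled "sensor" stream
pairs = make_pretext_pairs(raw_signal)
w = fit_weight(pairs)
assert abs(w - 1.0) < 1e-9   # structure recovered without any human labels
```

Real self-supervised perception models use the same principle at vastly larger scale: the pretext task manufactures supervision from the fleet's unlabeled footage, and the pretrained features are then fine-tuned on a small labeled set.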

Second, MANA's perception handles massive data more efficiently. Facing the "data efficiency" problem at huge data scale, MANA built an incremental learning and training platform: a portion of existing stock data is sampled and mixed with new data to form a combined training set. During training, the new model's outputs are kept as consistent as possible with the old model's while fitting the new data as well as possible. Compared with the conventional approach, this saves 80% of overall compute and improves response speed 6x.
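The consistency constraint described here resembles knowledge distillation combined with data replay. The following toy sketch is an assumption about that general technique, not Haomo's implementation; it reduces the trade-off to a single-parameter model with a closed-form solution, balancing a fit to the new data against consistency with the old model on replayed data.

```python
def incremental_update(w_old: float, w_new_target: float, lam: float) -> float:
    """Minimize  (w - w_new_target)**2  +  lam * (w - w_old)**2.

    The first term fits the new data; the second keeps the new model's
    behavior consistent with the old model on replayed stock data.
    lam controls how strongly old knowledge is preserved.
    """
    return (w_new_target + lam * w_old) / (1 + lam)

w_old = 1.0          # parameter of the previously trained model
w_new_target = 3.0   # parameter the new data alone would suggest

assert incremental_update(w_old, w_new_target, lam=0.0) == 3.0   # no replay: old knowledge forgotten
assert incremental_update(w_old, w_new_target, lam=1.0) == 2.0   # balanced compromise
assert abs(incremental_update(w_old, w_new_target, lam=100.0) - 1.0) < 0.05  # heavy replay: stays near old
```

The compute saving comes from the same idea: instead of retraining from scratch on all accumulated data, only a sampled slice of old data plus the old model's outputs anchor the update.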

Third, MANA's perception is more powerful. Using a temporal Transformer model to map image features into bird's-eye-view (BEV) space in real time, MANA outputs more accurate and stable lane lines, freeing urban navigation-assisted driving from dependence on high-precision maps.
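BEV mapping of this kind is generally built on cross-attention: each BEV grid cell issues a learned query that attends over image features and pulls in whatever is relevant to that patch of ground. The scaled dot-product attention at its core can be sketched in a few lines of plain Python; this is a generic illustration, not Haomo's model.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    A BEV grid cell's query scores every image feature (key); the
    softmax-weighted mix of the corresponding values becomes that
    cell's BEV feature.
    """
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    peak = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return output, weights

# A query aligned with the first key attends almost entirely to it.
bev_query = [1.0, 0.0]
image_keys = [[10.0, 0.0], [0.0, 10.0]]
image_values = [[1.0, 0.0], [0.0, 1.0]]
out, w = attention(bev_query, image_keys, image_values)
assert w[0] > 0.99 and out[0] > 0.99
```

A temporal variant additionally feeds previous frames' BEV features in as extra keys and values, which is what stabilizes lane-line output from one frame to the next.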

Fourth, MANA's perception is more precise, aiming to leave no traffic signal in China unrecognized. With its upgraded onboard perception system, MANA specifically recognizes the state of brake lights and turn signals, making the vehicle safer and more composed when handling sudden braking or abrupt cut-ins ahead.

Fifth, MANA's cognitive ability has evolved again. For the most complex scene in the city, the intersection, MANA brings high-value real traffic-flow scenarios into its simulation system: working with Deqing, Zhejiang Province, and Alibaba Cloud, it imports real intersections into the simulation engine to build an autonomous-driving scenario library. Simulation grounded in real traffic is more timely and its microscopic traffic flow more realistic, effectively tackling the long-standing problem of getting through urban intersections.

Finally, MANA's cognitive intelligence has entered a new stage. By deeply mining massive human driving data from across the country and learning common sense and human-like maneuvers, assisted-driving decisions become more like actual human driving: the system can choose the best route for the situation at hand while staying safe, and the ride feels more like being driven by an experienced driver.

MANA's evolution has cleared the biggest obstacles on the road into the city. "City NOH is navigation-assisted driving that better understands Chinese road conditions," Gu said. Haomo's City NOH adopts a technical route of "heavy perception, light maps, heavy computation." Empowered by MANA, it has five highlights: intelligent traffic-light recognition, intelligent left and right turns, intelligent lane changes, intelligent static obstacle avoidance, and intelligent dynamic obstacle avoidance. An "intelligent traffic-flow handling" feature will also be officially released.

It is easy to imagine that future assisted driving will be usable not only on highways but also on our daily commutes, greatly easing travel fatigue and improving driving comfort. Personally, I am really looking forward to it.

03 "Heavy perception, light maps" will become the industry trend

Many car companies are pursuing urban assisted driving today, including Tesla, Huawei and Xpeng. The route Haomo has chosen appears closest to Tesla's: emphasizing first principles and relying on the vehicle's own intelligence to realize the various assisted-driving functions. Let's look at these companies' specific technical routes and how far each has gotten.

Start with Tesla. Globally, Tesla arguably has the fastest R&D and mass-production pace in assisted-driving technology. As early as last year, Tesla's FSD already supported advanced assisted driving in urban areas, and after continuous iteration, one American user reportedly completed a coast-to-coast drive with zero takeovers under assisted driving.

In China, however, due to data-security and other issues, Tesla FSD updates cannot be synchronized with the United States, so domestic Tesla customers can hardly enjoy the same experience as overseas users even after paying. Moreover, FSD's adaptation to the domestic driving environment and Chinese drivers' habits is still somewhat lacking, which dampens consumers' expectations of Tesla's assisted-driving system.

Compared with Tesla's near-stagnation in China, Huawei is moving at lightning speed. In early May, Huawei launched the Huawei HI version of the ARCFOX Alpha S, equipped with Huawei's intelligent-driving solution: driving-assistance hardware comprising three solid-state lidars, six millimeter-wave radars and 11 high-definition cameras, with Huawei's MDC 810 computing platform as the main control chip, delivering 400 TOPS of computing power. In addition, the recently popular Avatr 11 has also performed well with the support of Huawei's full-stack intelligent-vehicle solution. The HI version of the ARCFOX Alpha S can actively follow traffic, actively change lanes, keep its lane on high-curvature ramps, avoid pedestrians, and so on. Overall performance is genuinely good, though there are also some shortcomings, which we will come back to later.

As one of the earliest and best-known of China's new carmakers, Xpeng has also invested heavily in assisted-driving technology. The City NGP version of the Xpeng P5 reportedly carries driving-assistance hardware consisting of two lidars, five millimeter-wave radars, twelve ultrasonic radars and thirteen cameras, with 30 TOPS of computing power. In real-world tests it has demonstrated 180-degree U-turns, traffic-light recognition, lane changes and detours, handling of extreme weather and special situations, unprotected left turns, and detection of and response to large urban vehicles.

Although Huawei and Xpeng