The world’s largest automotive supplier, Bosch, provided a massive stage today for NVIDIA CEO Jen-Hsun Huang to showcase our new AI platform for self-driving cars.

Speaking in the heart of Berlin to several thousand attendees at Bosch Connected World — an annual conference dedicated to the Internet of Things — Huang detailed how deep learning is fueling an AI revolution in the auto industry.

The small AI car supercomputer was unveiled yesterday in the opening keynote address by Bosch CEO Dr. Volkmar Denner, who focused on how his company, which had €73 billion ($77.6 billion) in revenue last year, is pushing deeper into the areas of sensors, software and services.

“I’m so proud to announce that the world’s leading tier-one automotive supplier — the only tier one that supports every car maker in the world — is building an AI car computer for the mass market,” said Huang, speaking in the main theater of the glass-roofed, red-brick exhibition center.

NVIDIA’s Huang and Bosch’s Hoheisel reveal the Bosch AI Car Computer.

“It blows my mind where this industry is going and where this strategy is going,” said Dr. Dirk Hoheisel, who sits on Bosch’s management board, responsible for mobility solutions.

First Adoption of Xavier Technology

The collaboration with Bosch represents the first announced DRIVE PX platform incorporating NVIDIA’s forthcoming Xavier technology. Xavier can process up to 30 trillion deep learning operations a second while drawing just 30 watts of power.
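A quick back-of-envelope calculation from the figures above (the 30 TOPS and 30 W numbers are restated from the text, not an independent benchmark) shows what that spec means in efficiency terms:

```python
# Efficiency implied by the figures above: 30 trillion deep learning
# operations per second at a 30-watt power draw.
ops_per_second = 30e12  # 30 TOPS
power_watts = 30.0

# Watts are joules per second, so dividing the two rates gives
# operations per joule of energy.
ops_per_joule = ops_per_second / power_watts
print(f"{ops_per_joule:.0e} deep learning operations per joule")  # 1e+12
```

That works out to one trillion operations per joule, or 1 TOPS per watt.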

That power is needed to achieve what the automotive industry refers to as “Level 4 autonomy,” where a car can drive on its own, without human intervention. The number of cars with various levels of autonomy will grow to a total of 150 million vehicles by 2025, analysts project.

NVIDIA’s Huang said his company will deliver technology enabling Level 3 autonomous capabilities (in which a car can drive on its own but still needs a driver to intervene under various conditions) by the end of this year, and Level 4 capabilities by the end of 2018.

Huang noted that a wide range of leading brands are working on autonomous solutions — from traditional carmakers like Audi, Ford and BMW, to new competitors like Tesla, and technology innovators like Waymo, Uber and Baidu.

Such vehicles will require unprecedented levels of computing power, due to the profound complexity posed by self-driving. Hand-coded software can't possibly anticipate the nearly infinite number of situations that can arise on the road, Huang said in his keynote.

Cars that stray from their lanes, objects that fall onto the roadway, rapid shifts in weather conditions, deer that dart across the road. The permutations are endless.

While cars on the road now are capable of detecting vehicles in front of them and braking when needed, the requirements for autonomous driving are dramatically more demanding, Huang said.

Instead, deep learning can enable us to train a car to drive, and ultimately perform far better — and more safely — than any human could do behind the wheel.

“We’ve really supercharged our roadmap to autonomous vehicles,” Huang said. “We’ve dedicated ourselves to build an end-to-end deep learning solution. Nearly everyone using deep learning is using our platform.”

Huang noted that the company’s massive commitment — which began five years ago and now has thousands of engineer-years of effort behind it — has put NVIDIA at the center of the AI revolution. It’s working with every significant cloud service provider, researchers worldwide and a wide range of companies in nearly every sector.

Accelerating the AI Pipeline

Deep learning plays a vital role throughout the entire computational pipeline of a self-driving vehicle, enabling it to grow smarter with experience. This involves:

  • Detection — understanding the world around the vehicle;
  • Localization — using what’s perceived to create a detailed local map;
  • Occupancy grid — building a real-time 3D environment around the vehicle;
  • Path planning — determining how to proceed along the mapped route;
  • Vehicle dynamics — calculating how to drive smoothly.
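The five stages above can be sketched as one per-frame loop. Everything below is an illustrative stand-in: the function names, data shapes and stub logic are assumptions for exposition, not NVIDIA DRIVE PX APIs.

```python
# Hypothetical per-frame sketch of the five pipeline stages described above.
# All names and stub logic are illustrative, not NVIDIA DRIVE PX APIs.

def detect(sensor_data):
    """Detection: identify objects around the vehicle from fused sensors."""
    return sensor_data.get("objects", [])

def localize(detections):
    """Localization: estimate the vehicle's pose from what it perceives."""
    return {"x": 0.0, "y": 0.0, "heading": 0.0}

def build_occupancy_grid(detections):
    """Occupancy grid: mark which cells around the vehicle are occupied."""
    return {(obj["x"], obj["y"]) for obj in detections}

def plan_path(pose, grid, goal):
    """Path planning: choose waypoints along the route, avoiding occupied cells."""
    candidates = [(1, 0), (2, 0), goal]
    return [p for p in candidates if p not in grid]

def vehicle_dynamics(path):
    """Vehicle dynamics: turn waypoints into smooth control commands."""
    return {"steer": 0.0, "throttle": 0.3 if path else 0.0}

def drive_tick(sensor_data, goal):
    """One tick of the pipeline: sensor data in, control commands out."""
    detections = detect(sensor_data)
    pose = localize(detections)
    grid = build_occupancy_grid(detections)
    path = plan_path(pose, grid, goal)
    return vehicle_dynamics(path)

controls = drive_tick({"objects": [{"x": 1, "y": 0}]}, goal=(3, 0))
print(controls)  # {'steer': 0.0, 'throttle': 0.3}
```

In a real system each stub would be a deep neural network or optimization routine; the point here is only the data flow from perception through to control.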


Consider the processing horsepower required to make sense of the ocean of data that streams in from a car’s array of sensors, including cameras, radar, lidar and ultrasonics. This is where deep learning comes in. A deep neural network is first developed and trained in the data center; the NVIDIA DRIVE PX system in the car then runs it to understand everything happening around the vehicle in real time.
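The train-in-the-data-center, run-in-the-car split can be illustrated with a toy example. Both function names are hypothetical, and a single perceptron stands in for a real deep neural network:

```python
# Toy illustration of the workflow above: train a model offline, then run
# it on incoming sensor features in real time. A perceptron stands in for
# a deep neural network; all names are hypothetical, not DRIVE PX APIs.
import random

def train_in_datacenter(examples, epochs=200, lr=0.1):
    """Fit weights on labeled sensor data (2 features -> obstacle yes/no)."""
    random.seed(0)  # deterministic toy initialization
    w = [random.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def infer_in_car(model, sensor_features):
    """Run the trained model on one frame of sensor features."""
    w, b = model
    score = w[0] * sensor_features[0] + w[1] * sensor_features[1] + b
    return 1 if score > 0 else 0

# Toy labeled data: (proximity, size) -> obstacle present?
data = [((0.9, 0.8), 1), ((0.1, 0.2), 0), ((0.8, 0.9), 1), ((0.2, 0.1), 0)]
model = train_in_datacenter(data)
print(infer_in_car(model, (0.85, 0.9)))  # 1: obstacle detected
```

The expensive loop (training) happens once, offline; the car only runs the cheap forward pass, which is what makes real-time inference on in-vehicle hardware feasible.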

From Cloud to Car

Companies are using the power of the GPU in the cloud as well. NVIDIA HGX-1 is the new AI supercomputer standard, designed for deep learning in the data center and for use across all major industries.

AI Car Revolution

Many cars on the road today have some basic safety features, known as advanced driver assistance systems (ADAS). These systems are often based on smart cameras and offer basic detection of obstacles and identification of lane markings. These capabilities can help carmakers increase their New Car Assessment Program safety ratings.

While a stepping stone to making cars safer, ADAS is still a long way from a self-driving car, and the amount of processing required for an autonomous vehicle is orders of magnitude greater. Huang put the increase at a factor of at least 50.

And that doesn’t include the addition of an AI co-pilot. Introduced at CES two months ago, NVIDIA’s AI co-pilot technology will act as an AI assistant in the vehicle, as well as provide safety alerts of potential hazards outside the car. By monitoring the driver as well as a full 360 degrees around the car, the system works to keep the occupants of the vehicle safe.

“Of course, our goal someday is that every single car will be autonomous,” Huang said. “But for the path to then, we’ll have AI that will be your co-pilot, will be your guardian, and look out for you.”

Powered by deep learning, AI co-pilot can recognize faces to automatically set specific preferences in the car depending on the driver. The system can also see where the driver is looking, and detect expressions to understand the driver’s state of mind. Combining this information with what is happening around the car enables the AI co-pilot to warn the driver of unseen potential hazards.

In addition, the system has the ability to read lips. So even if the radio is cranked up, the car can understand a driver’s instructions.