Inside track
www.autoconnectmagazine.com
Core of the issue
If you've played computer games or used a recent smartphone, then the name Nvidia should be part of your everyday vocabulary. The chip company has been powering the games world for many years, allowing software firms to create the most eye-catching, immersive and alluring worlds for gamers to lose themselves in.
Whether it's the firm's top-of-the-line GeForce GTX Titan Z graphics card for desktop gaming PCs, or the company's Tegra 4i chip that powers smartphones including the LG G2, the chances are you've used a device Nvidia has been part of.
And the success of Nvidia's GPU technology has spread far beyond the realm of the games and mobile spheres.
There are over eight million cars on the road that use Nvidia's GPU processors, powering the screens and dials that flick round when you tap the throttle or fire up the navigation system. Manufacturers as diverse as Tesla, Honda, Audi, Porsche and Lamborghini all integrate Nvidia's chips into their vehicles. And that number will push through 25 million cars as more deals and tie-ups come to fruition.
But infotainment is only part of the Nvidia story. The company's automotive plans will also help us relax when we get behind the wheel in the future, as it looks to help car manufacturers bring autonomous cars to market.
“Nvidia started creating GPUs for graphics and then our GPUs started appearing everywhere, and we've seen our processors being used for graphics in the vehicle, but there are other applications,” says Danny Shapiro, director of automotive at Nvidia.
Last year Nvidia unveiled its Drive PX supercomputer, a system that could be easily integrated into a car, where it would read the surroundings through onboard cameras and other sensors and help the vehicle negotiate them autonomously. That would allow the 'driver' to sit back in their chair and relax while the car took the strain.
The Drive PX system comprised two Nvidia Tegra X1 processors, which delivered a combined 2.3 teraflops, and interfaces for up to 12 cameras as well as radar, lidar and ultrasonic sensors: everything you would need to build up a reliable picture of a car's surroundings. This combining of data from different systems is called sensor fusion.
This allows the onboard software to accurately understand the full 360° environment around the car and produce a robust representation of it, including static and dynamic objects (parked cars and pedestrians, for example). The system also uses deep neural networks (DNNs) to detect and classify objects, which can dramatically increase the accuracy of the resulting fused sensor data.
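To give a flavour of what sensor fusion means in practice, here is a deliberately simplified sketch in Python. It combines distance estimates from three sensors using inverse-variance weighting, a textbook fusion technique: more trustworthy sensors count for more, and the fused estimate is more precise than any single reading. The sensor values and variances are made up for illustration; this is not Nvidia's actual Drive PX implementation.

```python
# Illustrative sensor fusion: combine independent distance estimates
# (e.g. from camera, radar and lidar) into one more reliable estimate
# using inverse-variance weighting. All numbers are hypothetical.

def fuse_estimates(readings):
    """Fuse a list of (value, variance) pairs.

    Sensors with lower variance (less noise) get more weight.
    Returns the fused value and its smaller, fused variance.
    """
    total_weight = sum(1.0 / var for _, var in readings)
    fused_value = sum(val / var for val, var in readings) / total_weight
    fused_variance = 1.0 / total_weight
    return fused_value, fused_variance

# Hypothetical distance-to-obstacle estimates in metres (value, variance):
camera = (25.0, 4.0)   # vision is noisier at range
radar  = (24.2, 1.0)   # radar gives good range accuracy
lidar  = (24.5, 0.5)   # lidar is the most precise here

distance, variance = fuse_estimates([camera, radar, lidar])
print(round(distance, 2), round(variance, 3))  # fused variance < any single sensor's
```

The key point is the last comment: the fused variance is smaller than the best individual sensor's, which is exactly why combining cameras, radar and lidar produces a more robust picture than any one of them alone.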
A DNN is a layered piece of software that learns from example data rather than being explicitly programmed, and it keeps improving as it sees more examples. In the case of autonomous cars, that learning helps the system recognise hazards and avoid crashes.
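The layered computation inside a neural network can be sketched very simply. The toy example below passes a handful of input features through two small layers of weighted sums and non-linearities, then scores two object classes. The weights, features and class names here are invented for illustration; a real driving network learns millions of weights from labelled camera data.

```python
# Toy illustration of a deep neural network's forward pass: inputs flow
# through successive layers of weighted sums and non-linearities, and
# the final layer scores each object class. All weights are made up.
import math

def relu(values):
    # Non-linearity: negative activations are clipped to zero.
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    # One dense layer: each output is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 3-value input (imagine crude features from a camera patch).
features = [0.8, 0.1, 0.5]

hidden = relu(layer(features,
                    [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],
                    [0.0, 0.1]))
probs = softmax(layer(hidden,
                      [[1.2, -0.7], [-0.4, 0.9]],
                      [0.0, 0.0]))

classes = ["pedestrian", "parked car"]
print(classes[probs.index(max(probs))])  # prints "pedestrian"
```

Training adjusts those weights so the network's answers match labelled examples; "deep" simply means many such layers stacked together.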
“We now have over 50 car makers, suppliers and research groups doing work with Drive PX; it's pretty incredible how quickly we've reached a critical mass and getting people into developing prototype vehicles and doing real trials,” says Shapiro.
But that was last year. Nvidia has now launched the second-generation Drive PX 2, which takes the system to the next level, and more importantly onto public roads.
Drive PX 2 provides the same processing power as 150 MacBook Pros and is made up of two next-generation Tegra processors plus two next-generation GPUs based on Nvidia's Pascal architecture. It's capable of delivering up to 24 trillion deep learning operations per second. That's over 10 times more computational horsepower than the previous generation.
The system's capabilities should enable it to quickly learn how to address the challenges of everyday driving, such as unexpected road debris, erratic drivers and construction zones. Deep learning also addresses numerous problem areas where traditional computer vision techniques are insufficient, such as poor weather conditions like rain, snow and fog, and difficult lighting conditions such as sunrise, sunset and extreme darkness.
Add to this that, for general-purpose floating-point operations, Drive PX 2's GPU architecture is capable of up to 8 trillion operations per second, over four times more than the previous-generation technology. That enables car firms to address the full breadth of autonomous driving algorithms, including sensor fusion, localisation and path planning.
And Volvo will be the first to make use of the technology as it puts 100 fully autonomous vehicles on the roads of Gothenburg, each one having to deal with real traffic conditions and react to other drivers in real time.
But while Nvidia might supply the computational power, it will be down to the individual car companies whether autonomous technology is a success.
“We supply the hardware, the software development tools, and then each of our customers go on and create their individual product. I think we're going to see a shift in how they compete. We're going to see the race to offer new features and new self driving modes over time,” says Shapiro.
We're going to see a lot more of Nvidia in the automotive sphere, whether powering infotainment functions or entire vehicles, so it might be wise to check the specifications of your next car for the chip company's name.