DeepMind, Waymo team up to boost AI accuracy in self-driving cars

September 05, 2019 // By Murray Slovick
It takes a lot of training to refine and improve the artificial-intelligence (AI) algorithms that power self-driving vehicles. Unfortunately, the training process is inherently inefficient: it typically requires either a massive amount of computing power to train many neural nets in parallel, or researchers spending time manually weeding out weak models.

Recently, however, with the help of its sister company DeepMind, Waymo reports it has found a way to improve its AI algorithms for autonomous driving by making the whole training process faster and more efficient.

Neural networks, by way of review, are sets of algorithms modelled loosely on the human brain. A neural network learns through trial and error: it is presented with a task and graded on whether it performed that task correctly against the desired outcome. It then attempts the task again and makes corrections without outside help. The idea is that in future attempts under similar conditions, the network will adjust its weighting factors and become more likely to perform the task correctly.
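As a rough illustration of that trial-and-error loop (this is a toy example in plain Python/NumPy, not anything resembling Waymo's perception networks), the model makes a prediction, is graded against the desired outcome, and nudges its weights so the next attempt is more likely to be correct:

```python
# Minimal sketch of trial-and-error learning: predict, grade, adjust weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # toy inputs
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                       # desired outcomes used to "grade" attempts

w = np.zeros(3)                      # the network's adjustable weights
learning_rate = 0.05                 # how large each correction is

for step in range(200):
    pred = X @ w                     # attempt the task
    error = pred - y                 # how wrong the attempt was
    grad = X.T @ error / len(X)      # direction that reduces the error
    w -= learning_rate * grad        # adjust weights for the next attempt

print("learned weights:", w)         # approaches true_w over repeated attempts
```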

The overall goal is to find a learning rate that ensures the AI network gets better after each iteration.

Finding the best training regimen (or “hyperparameter schedule” in AI parlance) is commonly achieved in one of two ways: training numerous models in parallel, which is computationally expensive, or having researchers monitor networks while they’re training, periodically culling the weakest performers by hand. Either way, it requires significant resources.
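The first of those conventional approaches looks roughly like the hedged sketch below: launch many training runs, each with its own fixed hyperparameters, and keep whichever finishes best. The function and values here are illustrative placeholders, not a real training pipeline; the point is that every candidate must be trained to completion, which is what makes the method so expensive.

```python
# Sketch of fixed-hyperparameter search: train many candidates, keep the best.
import random

def train_and_score(learning_rate: float) -> float:
    """Placeholder for an expensive full training run; returns a validation score."""
    # In reality this trains a network to completion -- the costly part.
    return 1.0 - abs(learning_rate - 0.01)   # pretend 0.01 is the sweet spot

candidates = [10 ** random.uniform(-4, -1) for _ in range(16)]   # 16 parallel runs
scores = {lr: train_and_score(lr) for lr in candidates}
best_lr = max(scores, key=scores.get)
print(f"best fixed learning rate: {best_lr:.4f} (score {scores[best_lr]:.3f})")
```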

What Waymo and DeepMind have done is automate that weeding-out process, replacing ineffective neural nets with copies of the better-performing networks running the task. The companies call the approach Population Based Training (PBT) and are using it for tasks such as pedestrian detection.

In operation, PBT evaluates neural-net models every 15 minutes. In a manner reminiscent of Darwin’s theory of evolution, the nets “compete” with each other to see which ones survive the culling process. If a member of the population is underperforming, it’s replaced with a copy of the better-performing member, with slightly changed hyperparameters.

In a nutshell, losing models are removed from the training pool and training continues using only winning models. PBT uncovers a schedule of hyperparameter settings rather than trying to find a single fixed set to use for the whole course of training. PBT also doesn’t require DeepMind to restart training from scratch, because each winning network inherits the full state of its parent network. The dual process of weeding out underperformers and mutating higher performers is done in parallel.
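A simplified sketch of that exploit-and-explore loop is shown below. It is not Waymo's or DeepMind's implementation; the class and scoring function are stand-ins chosen only to show the mechanics: every member of the population keeps training, underperformers inherit a winner's full state, and the inherited hyperparameters are slightly perturbed before training continues.

```python
# Simplified Population Based Training loop: train, evaluate, exploit, explore.
import copy
import random

class Member:
    def __init__(self, lr: float):
        self.lr = lr                 # hyperparameter being searched
        self.weights = [0.0]         # stand-in for the network's full state
        self.score = 0.0

    def train_step(self):
        # Placeholder training: pretend learning rates near 0.01 work best.
        self.weights[0] += self.lr
        self.score += 1.0 - abs(self.lr - 0.01)

population = [Member(10 ** random.uniform(-4, -1)) for _ in range(8)]

for interval in range(10):                   # each interval ~ a periodic evaluation
    for m in population:
        m.train_step()                       # in practice, members train in parallel
    population.sort(key=lambda m: m.score)
    losers, winners = population[:2], population[-2:]
    for loser in losers:
        parent = random.choice(winners)
        loser.weights = copy.deepcopy(parent.weights)      # inherit full state (exploit)
        loser.score = parent.score
        loser.lr = parent.lr * random.choice([0.8, 1.2])   # perturb hyperparameter (explore)

best = max(population, key=lambda m: m.score)
print(f"best member: lr={best.lr:.4f}, score={best.score:.2f}")
```

Because losers inherit a winner's weights rather than restarting from scratch, the population as a whole traces out a schedule of hyperparameter values over the course of training instead of committing to a single fixed setting.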

