
MIT, Microsoft model identifies AI ‘blind spots’

Technology News |
By Rich Pell



Researchers at MIT and Microsoft have developed a model that identifies instances in which autonomous systems have “learned” from training examples that don’t match what is actually happening in the real world. Such learned actions could result in costly and dangerous errors in real-world situations. The new model, say the researchers, could allow engineers to improve the safety of artificial intelligence (AI) systems – such as driverless vehicles and autonomous robots – by identifying such “blind spots” beforehand.

For example, say the researchers, the AI systems powering driverless cars are trained extensively in virtual simulations to prepare a vehicle for nearly every event on the road. But sometimes the car makes an unexpected error in the real world because an event occurs that should – but doesn’t – alter the car’s behavior.

One example cited by the researchers is a driverless car that wasn’t trained – and, more importantly, doesn’t have the necessary sensors – to differentiate between distinctly different scenarios, such as large white cars and ambulances with red, flashing lights. If an ambulance turns on its sirens, the car may not know to slow down and pull over, because it does not perceive the ambulance as different from just another big white car.

To uncover such training blind spots, the researchers’ model uses human input to closely monitor a trained system’s actions as it acts in the real world, providing feedback when the system makes – or is about to make – a mistake. The original training data is then combined with the human feedback data, and machine-learning techniques are used to produce a model that pinpoints situations where the system most likely needs more information about how to act correctly.
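To make that pipeline concrete, here is a minimal sketch in Python of how such a blind-spot predictor could be assembled, assuming states are encoded as fixed-length feature vectors and human feedback arrives as binary “mistake” labels. The function names and the choice of a random-forest classifier are illustrative assumptions, not the method from the paper.

```python
# A minimal sketch, assuming states are feature vectors and human feedback
# is a binary mistake label. Names and the classifier choice are
# illustrative assumptions, not the paper's actual algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_blind_spot_model(sim_states, feedback_states, feedback_errors):
    """Fit a model that estimates P(mistake) for a given state.

    sim_states      -- states handled in simulation, assumed correct (label 0)
    feedback_states -- real-world states where a human gave feedback
    feedback_errors -- 1 if the human flagged a mistake there, else 0
    """
    X = np.vstack([sim_states, feedback_states])
    y = np.concatenate([np.zeros(len(sim_states)), feedback_errors])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

def is_blind_spot(model, state, threshold=0.5):
    """Flag a state whose predicted mistake probability exceeds the threshold.
    Assumes both labels (0 and 1) were present in the training data."""
    return model.predict_proba(state.reshape(1, -1))[0, 1] >= threshold
```

At deployment, states flagged by `is_blind_spot` would be the ones where the system should defer to a human or fall back to conservative behavior.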

“The model helps autonomous systems better know what they don’t know,” says Ramya Ramakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author of a paper on the study. “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

The researchers validated their method using video games, with a simulated human correcting the learned path of an on-screen character. The next step, say the researchers, is to incorporate the model with traditional training and testing approaches for autonomous cars and robots with human feedback.
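As a rough illustration of that validation setup, the following toy sketch has a flawed learned policy act in a one-dimensional corridor while a simulated “human” oracle flags every step where it deviates from the correct action. The environment, both policies, and all names are invented for illustration, not taken from the study.

```python
# Toy stand-in for the video-game experiment: a simulated human oracle
# watches a flawed agent and labels each state where it makes a mistake.
import numpy as np

def agent_policy(pos):
    # Flawed learned policy: moves right, except on odd positions (the blind spot).
    return +1 if pos % 2 == 0 else -1

def oracle_policy(pos):
    # Simulated human: the correct behavior is always to move right.
    return +1

states, errors = [], []
pos = 0
for _ in range(50):
    action = agent_policy(pos)
    states.append([pos])                              # state as a feature vector
    errors.append(int(action != oracle_policy(pos)))  # oracle feedback label
    pos = max(0, pos + action)

# These two arrays are exactly what train_blind_spot_model above would consume.
states, errors = np.array(states), np.array(errors)
print(f"Oracle flagged {errors.sum()} of {len(errors)} steps as mistakes")
```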

For more, see “Discovering Blind Spots in Reinforcement Learning.”

Related articles:
Microsoft acquires AI startup to build ‘brains’ for autonomous systems
New AI tech enables precise learning from imbalanced data
Microsoft AI simulator expands to autonomous car research
Baidu launches ‘no code’ AI model training platform
