
Self-learning robot a step closer to machine self-awareness

Technology News | By Rich Pell

Initially, say the Columbia University researchers, their robot does not know what its shape is. However, after a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation and is then able to use it to perform tasks and detect self-damage.

This, say the researchers, is not unlike how humans are able to imagine themselves, and acquire and adapt their self-image over their lifetime. Until now, most robots still learn using human-provided simulators and models, or by laborious, time-consuming trial and error.

“But if we want robots to become independent,” says Hod Lipson, professor of mechanical engineering and director of the Creative Machines Lab, “to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves.”

For their study, the researchers used a four-degree-of-freedom articulated robotic arm. Initially, it moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning – a machine learning method based on learning data representations as opposed to task-specific algorithms – to create a self-model.
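
The researchers’ exact architecture and code are not given in the article, but the basic recipe – random motor babbling followed by supervised training of a neural self-model – can be sketched roughly as follows. Everything in this sketch (the SelfModel class, the layer sizes, the randomly generated “sensor” data) is an illustrative assumption, not the study’s actual implementation:

```python
# A minimal, hypothetical sketch of babbling plus self-model training.
# In the real study, gripper positions come from the physical arm's sensors.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Maps (current joint angles, commanded joint angles) to a
    predicted end-effector position -- a stand-in for a self-model."""
    def __init__(self, dof: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dof, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),  # predicted (x, y, z) of the gripper
        )

    def forward(self, state, command):
        return self.net(torch.cat([state, command], dim=-1))

# "Babbling": 1000 random trajectories of 100 points each, as in the article.
dof, n_traj, traj_len = 4, 1000, 100
states = torch.rand(n_traj * traj_len, dof) * 3.14    # random joint angles
commands = torch.rand(n_traj * traj_len, dof) * 3.14
positions = torch.randn(n_traj * traj_len, 3)         # placeholder sensor data

model = SelfModel(dof)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):   # the real training reportedly took under 35 hours
    loss = nn.functional.mse_loss(model(states, commands), positions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```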

The first self-models, say the researchers, were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters.

The self-model first performed a pick-and-place task in a closed-loop system, which allowed the robot to recalibrate its position between each step along the trajectory while relying entirely on the internal self-model for planning. With this closed-loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100% success.

Even in an open-loop system, which involves performing a task based entirely on the internal self-model, without any external feedback, the robot was able to complete the pick-and-place task with a 44% success rate, say the researchers.

“That’s like trying to pick up a glass of water with your eyes closed,” says PhD student Robert Kwiatkowski, a lead author of a paper on the study, “a process difficult even for humans.”
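
The difference between the two control modes can be sketched in the same hypothetical terms, reusing the SelfModel network from the sketch above. The helpers read_joint_sensors, send_to_motors, and plan_step are placeholders for hardware I/O and planning, not the paper’s actual control stack:

```python
# Hypothetical closed- vs open-loop execution with a learned self-model.
import torch

def read_joint_sensors(dof: int = 4) -> torch.Tensor:
    """Placeholder for reading the arm's joint encoders."""
    return torch.zeros(dof)

def send_to_motors(command: torch.Tensor) -> None:
    """Placeholder for driving the physical arm."""
    pass

def plan_step(self_model, state, target, n_candidates: int = 64):
    """Choose the candidate command whose *predicted* gripper position,
    according to the self-model, lands closest to the target waypoint."""
    candidates = state + 0.1 * torch.randn(n_candidates, state.shape[-1])
    with torch.no_grad():
        preds = self_model(state.expand(n_candidates, -1), candidates)
    return candidates[(preds - target).norm(dim=-1).argmin()]

def execute(trajectory, self_model, closed_loop: bool):
    state = read_joint_sensors()
    for waypoint in trajectory:
        command = plan_step(self_model, state, waypoint)
        send_to_motors(command)
        if closed_loop:
            state = read_joint_sensors()  # recalibrate between steps
        else:
            state = command  # trust the self-model blindly ("eyes closed")
```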

The self-modeling robot was also used for other tasks, such as writing text with a marker. In addition, to test whether the self-model could detect damage to itself, the researchers 3D-printed a deformed part to simulate damage; the robot detected the change and re-trained its self-model. The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.
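
As a rough illustration of that damage-detection loop – again an assumption about mechanism, not the paper’s published method – the robot can monitor the self-model’s prediction error on fresh sensor data and trigger re-training when the error exceeds a threshold:

```python
# Hypothetical damage detection: a jump in self-model prediction error
# signals that the body no longer matches the self-image.
import torch

ERROR_THRESHOLD = 0.04  # metres; roughly the ~4 cm accuracy quoted above

def self_model_error(self_model, states, commands, observed_positions):
    """Mean distance between predicted and observed gripper positions."""
    with torch.no_grad():
        preds = self_model(states, commands)
    return (preds - observed_positions).norm(dim=-1).mean().item()

def check_for_damage(self_model, fresh_batch, retrain_fn):
    states, commands, observed = fresh_batch
    if self_model_error(self_model, states, commands, observed) > ERROR_THRESHOLD:
        # Body changed (e.g. a deformed part): re-train the self-model
        # on newly collected babbling data.
        retrain_fn(self_model, fresh_batch)
```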

Such self-imaging, say the researchers, is key to enabling robots to move beyond the confines of so-called “narrow AI” toward more general abilities.

“This is perhaps what a newborn child does in its crib, as it learns what it is,” says Lipson. “We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”

In addition, say the researchers, robotics and AI may offer a fresh window into the age-old puzzle of consciousness.

“Philosophers, psychologists, and cognitive scientists have been pondering the nature of self-awareness for millennia, but have made relatively little progress,” says Lipson. “We still cloak our lack of understanding with subjective terms like ‘canvas of reality,’ but robots now force us to translate these vague notions into concrete algorithms and mechanisms.”

The researchers are now exploring whether robots can model not just their own bodies, but also their own minds – in other words, whether robots can “think about thinking.”

For more, see “Task-agnostic self-modeling machines.”

Related articles:
AI system lets robots teach themselves to see
AI technique helps robots learn by observing humans
Embedded brain reading enables better human-robot interaction
Nvidia robotics platform heralds ‘new era’ of autonomous machines
