
June 06, 2018 // By Christoph Hammerschmidt
Embedded brain reading enables better human-robot interaction
Buttons and levers are not the right way to control robots in real time, and even voice control can be too clumsy to trigger the desired action. The aim of a current research project is therefore to control robots directly through the operator's thoughts; in this way, robots should learn to understand and interpret humans.

In embedded brain reading, event-related potentials (ERPs) in the EEG serve as input signals; they arise in response to an internal change of state or an external stimulus. At DFKI, these potentials are used to improve the interaction between humans and robots. The scientists Dr. Elsa Andrea Kirchner and Dr. Su Kyoung Kim investigated how ERPs can be detected by single-trial detection in the EEG and what influence different training modalities have. They showed that ERPs can be detected by single-trial detection even under "dual-task" conditions, i.e. when a person is engaged in several activities at once. The rarer and more important the stimulus caused by the task, the higher the detection performance. Single-trial ERP recognition is particularly suitable for real-time online EEG analysis, for example for controlling an exoskeleton. In the context of rehabilitative therapy, ERP recognition can provide information not only about planned movements but also, for example, about a patient's attention state.
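To make the idea of single-trial ERP detection concrete, the following is a minimal sketch in Python with scikit-learn: synthetic epochs stand in for pre-processed EEG, a P300-like deflection is injected into target trials, and a linear discriminant classifier is trained on windowed mean amplitudes. All data shapes, window sizes and the choice of classifier are illustrative assumptions, not the DFKI pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for pre-processed EEG epochs (hypothetical shapes):
# 200 trials x 8 channels x 100 samples, e.g. 0-500 ms after the stimulus.
n_trials, n_channels, n_samples = 200, 8, 100
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)  # 1 = target stimulus, 0 = non-target

# Inject a crude P300-like deflection into the target trials (around 300 ms).
X[y == 1, :, 55:75] += 0.8

# Feature extraction: mean amplitude in consecutive 50 ms windows per channel,
# a common, simple feature set for single-trial ERP classification.
windows = X.reshape(n_trials, n_channels, 10, 10).mean(axis=-1)
features = windows.reshape(n_trials, -1)

# Classify each single trial as target vs. non-target.
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, features, y, cv=5)
print(f"single-trial detection accuracy: {scores.mean():.2f}")
```

In a real-time setting, the same classifier would be applied to each incoming stimulus-locked epoch as it arrives, which is what makes single-trial detection suitable for online EEG analysis.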

Robots learn from mistakes thanks to human negative feedback

Another paper by the team focuses on the fruitful use of the so-called error-related potential. How this potential can be used for human-robot interaction is the subject of the paper "Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction", published in the Nature journal Scientific Reports. The scientists of the Robotics Innovation Center and the University of Bremen describe a machine learning method developed at DFKI in which a robot learns from its own misbehaviour in gesture-controlled interaction with humans. At the same time, the robot learns to distinguish human gestures and to assign them to the actions it can perform.
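How such ErrP-based feedback might drive learning can be illustrated with a simplified, hypothetical sketch: a bandit-style Q-learning loop in which a simulated detected error-related potential after the robot's action acts as negative reward while the robot learns a gesture-to-action mapping. The gesture and action sets, the detection accuracy and the learning rule are all assumptions for illustration; the paper's actual method is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 human gestures, 3 robot actions; the (unknown)
# correct mapping is gesture i -> action i. The robot learns it from
# binary feedback only: a detected error-related potential (ErrP) in the
# operator's EEG counts as negative reward.
n_gestures, n_actions = 3, 3
Q = np.zeros((n_gestures, n_actions))
alpha, epsilon = 0.3, 0.2
ERRP_DETECTION_ACCURACY = 0.85  # single-trial ErrP detection is imperfect


def errp_detected(gesture, action):
    """Simulated EEG-based error detection (hypothetical stand-in)."""
    error_occurred = action != gesture
    correct_reading = rng.random() < ERRP_DETECTION_ACCURACY
    return error_occurred if correct_reading else not error_occurred


for step in range(2000):
    gesture = int(rng.integers(n_gestures))
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[gesture]))
    # A detected ErrP is the only teaching signal the robot receives.
    reward = -1.0 if errp_detected(gesture, action) else 0.0
    # Bandit-style Q update (no successor state in this simplified setting).
    Q[gesture, action] += alpha * (reward - Q[gesture, action])

print("learned gesture -> action mapping:", Q.argmax(axis=1))
```

Even with imperfect ErrP detection, wrong actions accumulate more negative reward on average than correct ones, so the mapping converges; this robustness to noisy feedback is what makes the error potential attractive as an intrinsic teaching signal.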

