Proprioception is essential for a soft robot: only if it knows its current shape and configuration in three dimensions can it interact properly with its environment.
Typically, gathering proprioceptive information for a robot (giving the machine a sense of the relative position of its own articulated parts) is done through active three-degree-of-freedom mechanisms combined with closed-loop control. For soft robotics, the trend in the literature is to embed strain and pressure sensors along the neutral bending axes of a limb to detect its curvature and touch events.
But this approach yields discrete pressure and bending data only at certain points and along certain axes, which, according to researchers from Cornell University, limits the information available about a robot’s configuration. More data points can of course be gathered by integrating more sensors, but at the cost of system complexity.
In a paper titled “Soft optoelectronic sensory foams with proprioception” published in the Science Robotics journal, the researchers opted to remove all forms of discrete pressure and strain sensors and instead embedded an array of flexible optical fibres in the base layer of an elastomeric foam robotic limb (about the size of a finger for their experiments).
Each optical fibre terminated in the base layer, exiting it to illuminate the bulk of the foam internally. In the research setup, the fibres not only illuminated the foam; they also collected the diffuse reflected light within the limb, which was measured through a beam splitter and camera external to the limb.
First, the researchers bent and twisted the foam to known angles and recorded the intensity of the diffuse reflected light leaving each fibre. Then by applying machine learning techniques to the data, they were able to produce models to predict the foam’s deformation state from the internally reflected light.
Using machine learning instead of deriving a theoretical model proved much simpler, and the results were impressive. In their experiments, real-time diffuse reflected light data could be interpreted by the algorithms to tell whether the foam was twisted clockwise, twisted counter-clockwise, bent up, or bent down. The model predicted the type of deformation with 100% accuracy and the magnitude of the deformation with a mean absolute error of 0.06°.
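The two-stage idea — classify the deformation type, then regress its magnitude from the per-fibre intensities — can be sketched in plain Python. Everything below is invented for illustration (the fibre count, the fake intensity model, and the choice of a nearest-centroid classifier with per-type least-squares regression); it is not the authors' code or data.

```python
# Illustrative sketch only: recover deformation type and magnitude from
# per-fibre reflected-light intensities. Fibre count, signal model, and
# learning method are all assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N_FIBRES = 30
TYPES = ["twist_cw", "twist_ccw", "bend_up", "bend_down"]

def measure(kind, magnitude_deg):
    """Fake fibre readout: each deformation type dims its own fibre group
    in proportion to the deformation magnitude, plus sensor noise."""
    x = rng.normal(1.0, 0.02, N_FIBRES)
    lo, hi = kind * (N_FIBRES // 4), (kind + 1) * (N_FIBRES // 4)
    x[lo:hi] -= 0.01 * magnitude_deg
    return x

# "Training": record intensities at known angles, as in the experiments.
X, kinds, mags = [], [], []
for k in range(4):
    for _ in range(200):
        m = rng.uniform(1.0, 30.0)
        X.append(measure(k, m)); kinds.append(k); mags.append(m)
X, kinds, mags = np.array(X), np.array(kinds), np.array(mags)

# Classifier: nearest centroid on the direction of intensity change.
centroids = np.array([(X[kinds == k] - 1.0).mean(axis=0) for k in range(4)])
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

# Regressor: per-type least-squares map from intensities to magnitude.
W = [np.linalg.lstsq(
        np.c_[X[kinds == k], np.ones((kinds == k).sum())],
        mags[kinds == k], rcond=None)[0] for k in range(4)]

def predict(x):
    k = int(np.argmax(centroids @ (x - 1.0)))
    return TYPES[k], float(np.r_[x, 1.0] @ W[k])

kind, magnitude = predict(measure(2, 15.0))   # probe: a 15° upward bend
print(kind, round(magnitude, 1))
```

A real implementation would train on measured intensities rather than a fabricated signal model, but the structure — one model for deformation class, one for magnitude — mirrors the pipeline the article describes.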
These figures compare favourably with our own biological proprioceptive capabilities. The article cites studies reporting that wrist, finger, and elbow joint angle absolute errors lie between 1° and 12°, with proprioception of the proximal interphalangeal joint angle limited to between 4° and 9°. For a human index finger (about 80mm in length), these proprioception errors translate into a fingertip position error of 3 to 6mm. In comparison, the mean bend error reported for the soft robotic limb corresponded to an error of about 2mm in the position of the sensor’s movable end.
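The angle-to-position conversion is simple arc-length arithmetic: for small angles, a joint-angle error of θ (in radians) on a segment of length L displaces the tip by roughly L·θ. The 40mm segment length below is our own assumption (roughly the distance from the proximal interphalangeal joint to the fingertip), not a figure from the article.

```python
# Back-of-the-envelope arc-length check (our arithmetic, not the paper's):
# tip displacement ≈ segment length × angle error in radians.
import math

def tip_error_mm(segment_length_mm, angle_error_deg):
    return segment_length_mm * math.radians(angle_error_deg)

# Assumed ~40 mm from the PIP joint to the fingertip:
print(round(tip_error_mm(40, 4), 1))   # 4° joint error → ≈ 2.8 mm
print(round(tip_error_mm(40, 9), 1))   # 9° joint error → ≈ 6.3 mm
```

Under that assumed segment length, the 4° to 9° joint-angle range lands close to the 3 to 6mm fingertip error the article cites.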
Although the external light-measurement setup was too bulky to integrate into a robot’s arm, the authors are confident that the large illuminator and the camera could be replaced by light-emitting diodes and photodiodes, while the beam splitter could be miniaturized or removed entirely if the number of embedded fibres were doubled.
Here, machine learning shows it can simply sort out a soft robot’s proprioception, enabling reliable 3D control and response to external stimuli.
Cornell University – www.cornell.edu