The glove, say the researchers, can be used to create high-resolution tactile datasets that could enable an AI system to recognize objects through touch alone. Such information, they say, could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.

The low-cost knitted “scalable tactile glove” (STAG) is equipped with about 550 sensors across nearly the entire hand. Each sensor captures pressure signals as the glove interacts with objects in various ways.

A neural network processes the signals to build a dataset of pressure-signal patterns associated with specific objects. That dataset is then used to classify the objects and predict their weights by feel alone, with no visual input needed.

The researchers compiled a dataset using STAG for 26 common objects – including a soda can, scissors, tennis ball, spoon, pen, and mug. Using the dataset, say the researchers, the system predicted the objects’ identities with up to 76% accuracy, and predicted the correct weights of most objects within about 60 grams.

Sensor-based gloves in use today can cost thousands of dollars and often contain only around 50 sensors that capture less information. STAG, by contrast, produces very high-resolution data and is made from commercially available materials costing around $10 in total.

Their tactile sensing system, say the researchers, could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of how to interact with objects.

“Humans can identify and handle objects well because we have tactile feedback,” says Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “As we touch objects, we feel around and realize what they are. Robots don’t have that rich feedback.”

“We’ve always wanted robots to do what humans can do, like doing the dishes or other chores,” says Sundaram. “If you want robots to do these things, they must be able to manipulate objects really well.”

The researchers also used the dataset to measure how regions of the hand cooperate during object interactions. For example, when someone uses the middle joint of their index finger, they rarely use their thumb; but when they use the tips of their index and middle fingers, they always use their thumb as well.

“We quantifiably show, for the first time, that, if I’m using one part of my hand, how likely I am to use another part of my hand,” says Sundaram.
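
The article does not spell out how that likelihood is computed. Purely as a rough illustration, and assuming the tactile recordings are available as an array of per-sensor pressure frames plus a hypothetical mapping from sensors to hand regions (neither of which is detailed in the paper), a co-activation estimate could be computed along these lines in Python:

```python
import numpy as np

def region_coactivation(frames, region_of_sensor, threshold=0.1):
    """Estimate P(region b is active | region a is active) from tactile frames.

    frames           : (n_frames, n_sensors) array of per-sensor pressures.
    region_of_sensor : list mapping each sensor index to a hand-region label
                       (e.g. "thumb_tip", "index_tip") -- a hypothetical layout,
                       not the one used in the paper.
    """
    regions = sorted(set(region_of_sensor))
    # A region counts as "active" in a frame if any of its sensors exceeds the threshold.
    active = np.stack(
        [(frames[:, [i for i, r in enumerate(region_of_sensor) if r == reg]]
          > threshold).any(axis=1) for reg in regions],
        axis=1)  # (n_frames, n_regions), boolean

    coact = np.zeros((len(regions), len(regions)))
    for a in range(len(regions)):
        used_a = active[:, a]
        if used_a.any():
            coact[a] = active[used_a].mean(axis=0)  # conditional frequencies
    return regions, coact
```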

Prosthetics manufacturers can potentially use such information to, for example, choose optimal spots for placing pressure sensors and help customize prosthetics to the tasks and objects people regularly interact with.

STAG is laminated with an electrically conductive polymer that changes resistance in response to applied pressure. Conductive threads are sewn through holes in the conductive polymer film, from fingertips to the base of the palm, overlapping in a way that turns them into pressure sensors. When someone wearing the glove feels, lifts, holds, and drops an object, the sensors record the pressure at each point.

The threads connect from the glove to an external circuit that translates the pressure data into “tactile maps,” which are essentially brief videos of dots growing and shrinking across a graphic of a hand. The dots represent the location of pressure points, and their size represents the force – the bigger the dot, the greater the pressure.
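
The readout electronics are not described in detail in this article. As a sketch of that conversion step only, and assuming the circuit delivers one pressure reading per sensor plus a lookup table of each sensor's position on a hand-shaped grid (both assumptions for illustration), a single tactile-map frame might be assembled like this:

```python
import numpy as np

def readings_to_frame(readings, sensor_rows, sensor_cols, shape=(32, 32)):
    """Place per-sensor pressure readings onto a 2D grid -- one "tactile map" frame.

    readings                : 1D array, one pressure value per sensor (~550 for STAG).
    sensor_rows, sensor_cols: per-sensor grid coordinates on a hand-shaped layout
                              (illustrative; the real layout follows the glove).
    """
    frame = np.zeros(shape, dtype=np.float32)
    frame[sensor_rows, sensor_cols] = readings
    peak = frame.max()
    # Normalize so that dot size in a visualization tracks relative force.
    return frame / peak if peak > 0 else frame
```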

From those maps, the researchers compiled a dataset of about 135,000 video frames from interactions with the 26 objects. A convolutional neural network (CNN) was then designed to associate specific pressure patterns with specific objects. But the trick, say the researchers, was choosing frames from different types of grasps to get a full picture of the object.
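
The authors' network architecture is not given here. Purely as a minimal stand-in, a small convolutional classifier that maps a stack of tactile frames to scores over the 26 objects could look like this in PyTorch (the 32×32 frame size and layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TactileClassifier(nn.Module):
    """Toy classifier: a stack of tactile frames in, scores over 26 objects out."""

    def __init__(self, n_frames=8, n_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, n_classes)  # assumes 32x32 input frames

    def forward(self, x):            # x: (batch, n_frames, 32, 32)
        return self.head(self.features(x).flatten(1))
```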

The idea was to mimic the way humans can hold an object in a few different ways in order to recognize it, without using their eyesight. Similarly, the CNN chooses up to eight semirandom frames from the video that represent the most dissimilar grasps – say, holding a mug from the bottom, top, and handle.

To maximize the variation between the frames and give the best possible input, the system first groups similar frames together, producing distinct clusters that correspond to unique grasps, and then pulls one frame from each cluster so that it has a representative sample. The CNN then uses the contact patterns it learned in training to predict an object classification from the chosen frames.
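
A minimal sketch of that selection step, using k-means clustering as a stand-in for whatever grouping method the authors actually used, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representative_frames(frames, n_select=8, random_state=0):
    """Pick up to `n_select` mutually dissimilar tactile frames from one recording.

    frames: (n_frames, H, W) array of tactile-map frames for one object interaction.
    """
    flat = frames.reshape(len(frames), -1)
    k = min(n_select, len(frames))
    labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(flat)
    chosen = []
    for c in range(k):
        members = np.where(labels == c)[0]
        centre = flat[members].mean(axis=0)
        # The frame closest to the cluster centre represents that grasp.
        chosen.append(members[np.argmin(np.linalg.norm(flat[members] - centre, axis=1))])
    return frames[np.array(chosen)]
```

The selected frames could then be stacked as input channels and passed to a classifier like the toy one sketched earlier.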

For weight estimation, the researchers built a separate dataset of around 11,600 frames from tactile maps of objects being picked up by finger and thumb, held, and dropped. In testing, a single frame was fed into the CNN. Essentially, the network picks out the pressure around the hand caused by the object’s weight, ignoring pressure caused by other factors, such as hand positioning to keep the object from slipping. It then calculates the weight from the remaining pressures.
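
Again only as an illustrative stand-in for the authors' network, a single-frame weight regressor could be sketched as follows (the frame size, layer sizes, and use of a plain mean-squared-error loss are assumptions):

```python
import torch
import torch.nn as nn

class TactileWeightRegressor(nn.Module):
    """Toy regressor: one tactile-map frame in, a scalar weight estimate out."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, 1)  # assumes 32x32 input frames

    def forward(self, frame):        # frame: (batch, 1, 32, 32)
        return self.head(self.features(frame).flatten(1)).squeeze(1)

# Training would pair single frames from the pick-up/hold/drop recordings with each
# object's known weight and minimize, e.g., nn.MSELoss() between prediction and truth.
```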

The system, say the researchers, could be combined with the sensors already on robot joints that measure torque and force to help them better predict object weight.

For more, see “Learning the signatures of the human grasp using a scalable tactile glove.”

Related articles:
Flexible body sensor detects fine motor movements
‘Smart’ prosthetics monitor for infection, stress
Nanocomposite-coated fiber yields next-gen smart textiles
Electronic skin brings sense of touch to prosthetic users
Skin-inspired flexible tactile sensor for smart prosthetics
